Department of English
Quinnipiac University
Hamden, CT 06518
Adam.Katz@quinnipiac.edu
However paradoxical it may seem, I venture to suggest that our age threatens one day to appear in the history of human culture as marked by the most dramatic and difficult trial of all, the discovery of and training in the meaning of the ‘simplest’ acts of existence: seeing, listening, speaking, reading…
Louis Althusser, Reading Capital, 15
An act of pure attention, if you are capable of it, will bring its own answer. And you choose that object to concentrate upon, which will best focus your consciousness. Every real discovery made, every serious and significant decision ever reached, was reached and made by divination. The soul stirs, and makes an act of pure attention, and that is a discovery.
D.H. Lawrence, “Etruscan Places,” 55.
In a recent Chronicle of Love and Resentment, Eric Gans frames the ethical function of language as follows:
In all the years in which I have attempted to explain GA in writing and in speech, I have tended to place the major emphasis on representation, and in particular on “formal representation” or language. One of the points I have insisted on is that human language is qualitatively different from animal “languages”; the researches and insights of such as Terrence Deacon have essentially ended the debate on this point. But it follows from my very “definition” of the human as the species that poses a greater problem to its own survival than the totality of forces outside the human community that the primary transformation of the proto-human into the human was ethical. Language and, more broadly, representation emerged, per the originary hypothesis, to defer conflict, not to provide a cognitive or ratiocinative tool. But in the configuration of the originary event, the moral model of the reciprocal exchange of the sign is just as indubitably unique a human creation as language, and indeed more essential to the success of the event—and to the consequent emergence of our species. The urgent need that the event fulfills is to find a model of behavior that can defer violence within a community for which one-on-one animal hierarchy no longer provides an adequate solution. (Eric Gans, Chronicle 431, “Originary Ethics”)
The question of appropriate emphasis aside, the distinction Gans posits here, between the sign as a formal representation of a transcendent object, on the one hand, and the sign as a result, means, or manifestation of reciprocity, on the other, seems to me one that the originary hypothesis itself transcends. In other words, “formal representation” is itself ethical, is indeed the origin and resource of any ethics, so that ethics cannot be thought outside of it. At the same time, formal representation cannot be thought outside of ethics, since the “formality” of the representation lies in the shared attention it effects, and in this shared attention lies any ethics. In shared, or joint, attention lies the fundamental equality-on-the-scene that constitutes the human. All the resources we need for thinking about ethics lie in joint attention, in our ability to point to something, and approaching ethics in this way might enable us to create more minimal, more pared-down, ethical vocabularies.
To start with, if we can fold moral reciprocity into the shared attention constitutive of the sign and scene, couldn’t we say that what is immoral and a denial of reciprocity is whatever interrupts that shared attention? There are two ways shared attention can be interrupted: first, through some kind of distraction; second, through some kind of fixation. Distraction (distracting others; allowing oneself to be distracted) tears us away from the scene of joint attention, opens the possibility of unchecked approach to the object, and thereby demands a renewed, necessarily risky effort to redirect attention to the object—that is, distraction causes regression to a higher threshold of significance; fixation involves tearing oneself away from the scene and, ultimately, turning the other participants into objects of, rather than participants in, one’s now singular attention. Joint attention involves some equipoise amongst the participants of the scene: each knows that the other(s) could advance towards the object while accepting the signs given and given off by the other as warranty that they won’t without sufficient advance warning. Distraction, we could say, is the introduction of noise into the information thereby exchanged—either not putting forward sufficiently unequivocal signs oneself or subtracting from the univocality of those put forward by other(s). Fixation, meanwhile, is the securing for oneself of a system of processing information that reduces all information to univocality, on terms not subject to reciprocal exchange. Both distraction and fixation abort the scene, but both are also complementary possibilities of the originary structure of joint attention: the actuality or fear of distraction favors the formation of fixations, and so the ethical problem consists less in preventing than in recuperating and interrupting distractions and fixations. If we consider that anyone enters a scene by following a line of attention—by looking at what someone else is looking at and deferring appropriation as the other does in order to continue looking—one has not fully joined the scene until that line of attention has passed through oneself, and has been seen to do so. In other words, attention is not joint until all the participants show, through signs, that they are letting the object be so as to see what it has to show, to hear what it has to say—in which case, each participant must be inspected, so to speak, or credentialized, by having the sign they put forth validated. For one’s joining of the line of attention to become evident and thereby accepted as legitimate, that attention must first land on oneself as its object—in other words, each new participant on the scene represents a potential interruption of shared attention. At this crucial point, upon which one’s entry into the scene depends, one can only avoid becoming a distraction and potential source of fixation in others by doubling that attention back on oneself and joining it, becoming a sign and hence invisible, insofar as others are redirected back to the object through you. In that case, you will have shown others that the line of attention passes through your own eyes; unless, of course, your self-referentiality simply intensifies your distractiveness. Whether a distraction has taken place will depend upon whether those attended to or, in Louis Althusser’s term, “interpellated,” as potential objects of resentment or desire will have restored the line of attention by incorporating the interruption into the scene’s founding sign.
Perhaps an analogy would be helpful here. As Stephen S. Hall reports in the MIT Technology Review, the neuroscientist Daniela Schiller has discovered that
memories are not unchanging physical traces in the brain. Instead, they are malleable constructs that may be rebuilt every time they are recalled. The research suggests, she said, that doctors (and psychotherapists) might be able to use this knowledge to help patients block the fearful emotions they experience when recalling a traumatic event, converting chronic sources of debilitating anxiety into benign trips down memory lane.
If the originary event is event and sign together, then there is no event without the sign being both emitted and iterated by all the participants on the scene—just as memories are not completed until they are recalled, or represented (and are therefore never complete), the event will only have taken place once it is represented in the sign. There is thus a lag during which the event both has and has not taken place, and the sign, much like the therapeutic experience Schiller hopes to employ, uses that lag to convert chronic sources of violence into benign signification. Since in this lag “before” and “after” have not yet been completely settled, the benign sign can be secured both before and after the traumatic event has taken place. And what makes the sign “benign” is not that it excludes content (as Schiller remarks, the traumatic memories are recalled) but that the violence marking the event remembered is not what makes it memorable. Instead, the sign can embed the violence in some other elements of the memory that mitigate its fearfulness by turning the event into a sign—a sign, I would suggest, that prevents the one violent event from becoming the first in a series of such events compulsively suffered and/or committed.
I would call this restoration of the line of attention the “loop” in the line of attention, and undergoing this looping is what I would call “ostentation,” which is where ethical being is located. Whether one can undergo or go through the loop depends upon the group’s ability to see one as restoring the line of attention as well as upon one’s own ability to do so—ethics involves both ostentation and conferring a completed ostentation upon others, or the conversion of attentionality into intentionality. And this means that whether one has disrupted or patched together the continuity of the line of attention, or whether one has proactively identified a break or fixated upon (and thereby aggravated) the source of the break in that line, can only be known in the aftermath, on a new, converted scene of joint attention.
We keep the line of attention going by language learning—every loop in the line of attention involves an encounter of idioms. While it would be absurd to say that each of us speaks our own language, I think it makes perfect sense to say that at the margins we all differ in the emergent idioms we speak and that it is at such margins that real ethical questions emerge: when I think I’m following your discourse and taking the next “logical step,” but you think I am falsifying your most basic intuitions, then a difference in language has emerged. Michael Tomasello, along with many others, has made the argument that we learn language not as collections of single words with discrete meanings that then get combined in sentences, or as a series of grammatical rules applied to single instances of language use, but as pre-packaged chunks of discourse—phrases, formulas, commonplaces—that we can repeat appropriately insofar as we occupy scenes of joint attention with our elders. Over time our language base extends through discovering iterable patterns in and analogies with those chunks, noticing similar contexts, mixing chunks, exchanging elements of the chunks we are familiar with, and so on. This process never ends, continuing, say, for academics, when we read the sentences of one thinker through the sentences we have assimilated from another. We can identify patterns because we can re-arrange center-margin relations on scenes and still recognize a scene as the “same” scene (when I am done speaking and someone else takes “center stage,” it will still be the “same” scene); and we can identify analogies because the materials of one scene can be referred back to other scenes. Iterating (repeating differently) chunks, patterns and analogies, that is, is the way we follow the line of attention by repairing it. The novel sentences that linguists note we are able to compose are really, then, variant constructions, and “thinking” is a process of transforming chunks and commonplaces into such variant constructions.
Ethical being involves not so much learning the language of the other, or teaching the other one’s own language, since “language” is not a static entity that can stand still long enough to be the same language once it has been learned as it was when it began being taught. Rather, ethics involves learning the emergent language that arises at the margins or rough edges of the convergent idioms. Joint attention is always liable to lapse, prey to distraction and fixation, and must always be monitored and re-engaged—when we mistake ourselves and each other, it turns out that we have not been attending to the same thing after all, and our recourse is to attend to what we normally attend from: language, or the possibility of joint attention, an indication of faith in the capacity for shared deferral.
If new language is always emerging on the margins of any semiotic encounter, then two things follow: first, that this emergent language upsets the rough symmetry of the originary scene and, as on that scene itself, the new language can only be engaged through the kind of asymmetry aimed at symmetry I have elsewhere called “firstness”; and, second, that the hierarchical articulation of language, from phonemes meaningless in themselves but capable of meaningful combinations, to morphemes that are meaningful within larger words, to words which have meaning but minimally so until they are placed in sentences, which take on their full meaning in discourses, and so on—this entire hierarchical organization, which makes the lower levels invisible (we don’t notice phonemes, and barely notice individual words, when we are discussing serious issues), undergoes dislocation, and the elements at different levels become visible and “out of joint.” If we place these two characteristics of emergent language together, it follows that firstness, or what we can consider the irreducibly pedagogical dimension of language, involves attending to the normally subsumed “joints” of language. Language is irreducibly pedagogical because in any joint attention, someone must have pointed first in a more or less articulate anticipation of the interest of the other(s)—this indicative initiative is the interpellative act that introduces one into the attentional loop. At the same time, this pedagogical dimension is, we could say, “flickering,” insofar as once attention has been joined that initial asymmetry is integrated into the newly formed joint attention—and joint attention is self-authenticating, recognizing only such precursors and origins as it needs to sustain itself in the face of distractions and fixations.
The joints of language include far more than the elements of speech—there is tone, for example, and also within speech itself there is phonosemantics; but beyond that there is gesture and posture, which in turn open up onto broader tacit understandings of context. In interpersonal interaction, that is, the entire embodied mind (or, perhaps, minded body) is engaged in the manipulation of attention, while in the more advanced semiotic forms (writing and electronic communications) the senses are brought into play in various ways. There are many joints that an utterance or sign can be out of. The originary hypothesis, though, provides us with an effective way of studying, first of all, those semiotic elements closest to the originary scene: posture and gesture. The originary gesture is a gesture of aborted appropriation; doesn’t it make sense, then, to see all gestures and postures as “aborted” versions of some threatening activity? Take a large, powerful-looking man walking confidently down the street, head up, chest out, with long stride and arms swinging long and fast enough to knock over an average-sized individual. On the one hand, he is taking up space, defending a territory, intimidating potential trespassers—but, more fundamentally, he is claiming this space by suggesting, through gesture and posture, what he might do if that space were to be transgressed, and thereby also drawing attention to what he isn’t doing, the possible actions he has aborted—for example, embarking on the unrestricted conquest of space. He is claiming some space, not all of it, and if he claims more than his “fair share,” that just means one’s notion of fairness, based on modern, civic notions of equality, is incommensurable with his notion, based, implicitly, on one’s right to what one can defend. He does, though, have an understanding of fairness and is constituting a scene around himself through his gesture and posture.
Gesture and posture do not seem to work the same way as the levels of spoken language—they are not composed of a system of intrinsically meaningless elements, nor are they components of larger systems of meaning. But they are composed into larger wholes we call “situations,” “character,” “personality,” and “culture”; and, as I suggested earlier, it may very well be that phonemes and, more generally, the sounds of language are not as meaningless as post-Saussurian linguistics assumes. Furthermore, we can integrate gesture and posture into the semiotic systems of speech, writing and beyond by considering, first, that gestures and postures are ultimately ostensive gestures of deferral, and, second, that any meaning conveyed through the higher speech forms also involves an act of deferral. Eric Gans’s analysis of the primary linguistic forms in The Origin of Language makes it possible to see the imperative as a deferral of the ostensive, under conditions where an ostensive would likely fail and exacerbate the violence it is meant to stay (interestingly, the ostensive is first turned into an imperative by the one obeying the command); the declarative, meanwhile, is a deferral of the imperative, when that speech act is unlikely to be fulfilled (and hence risks a violent situation without resolution). There are many different kinds of ostensives—simple pointing at a desired or interesting object, promises, greetings, expressions of gratitude, and so on—and of imperatives—orders issued under emergency conditions, orders issued pursuant to some legitimate authority, commands received from divine agencies, or transmitted within a community and obeyed by generation after generation—and the analyses based on the principle I am proposing would get very complex. Indeed, all these forms of signifying are embedded in single acts, embodying knowledge on different levels—if I stand aside from that aggressive male occupying the center of the street, making my own, limited claim to space and signaling a refusal to challenge him (learning an emergent language of gesture and posture), while, perhaps, shaking my head at the evident barbarism in order to give a moral tincture to my resentment, I am most likely doing all that on the level of gesture and posture itself, not in sentences I speak to myself. All acts have an element of deferral (even if minimal or diminishing) insofar as one thing is done, and not another, and it is done in one way, not another, thereby holding back possibilities towards which the form of the act gestures. Even in the most intellectualized conversation, such gestural exchanges proceed unnoted, sometimes emphasizing or accentuating, sometimes subverting, positions taken in the overt communication. And, finally, this mode of analysis can be carried forward into writing and electronic communications insofar as we realize that these take place within disciplines, genres and institutions with rules that can be violated and boundaries that can be transgressed, and that each signifying act makes sense by heightening or singling out respect for at least some of these rules and boundaries, even if this respect is shown by violating and transgressing others.
The insistence upon the entanglement of mind and body in language events evokes the Sapir-Whorf hypothesis that there are no universally shared cognitive concepts outside of language: that time and space, as well as cultural and moral concepts, are all encoded in the grammar and semantics of specific languages.(1) It seems to me that this claim, if taken to its logical conclusion, would lead us to assert the singularity not just of every language but of every speech act: why should we, that is, assume that shared cognitive concepts undergird different uses of the same words any more than uses of “similar” words across languages? And yet it is very difficult to simply reject the question, since in our post-metaphysical world thought seems bound up with language in ways that continue to surprise. I therefore consider it fortunate that the hypothesis is alive and fairly well, drawing the interest not only of literary theorists and poets, but of cognitive linguists as well. This is the case even though the hypothesis cannot really be formulated coherently—if you want to claim that we can only think in terms of the grammar of a particular language, you are already begging the question of the relation between “thought” and “language,” which the hypothesis nevertheless depends upon (if we were to just assert the simpler “thought is language, language is thought [and that is all ye need to know on earth?],” the hypothesis would evaporate). I will suggest in a little while that the best use of the hypothesis is to identify the “emergent language” I am arguing is central to ethics, but a good way to get there is through a discussion of one way in which the hypothesis has proven generative for some cognitive linguists.
Dan Slobin sums up a problem, derived from linguistic theories of grammaticalization, that has been engaging cognitive linguists when he points out that “[t]here is a cline of linguistic elements from fully lexical content words to fully specialized grammatical morphemes” (426). The cognitive linguists Dedre Gentner and Lera Boroditsky use this distinction to modify the Whorfian problem by proposing what they call a “division of dominance”:
At one extreme, concrete nouns—terms for objects and animate beings—follow cognitive-perceptual dominance. They denote entities that can be individuated on the basis of perceptual experience. At the other extreme, closed-class terms—such as conjunctions and determiners—follow linguistic dominance. These meanings do not exist independent of language. Verbs and prepositions—even “concrete” motion verbs and spatial prepositions—lie between. Unlike closed-class terms, they have denotational functions, but the composition of the events and relations they denote is negotiated via language. (216-7)
So, at the first extreme, thought is independent of language, which in practice we can take to mean, first, easily and uncontroversially translatable; and, second, readily reducible to ostensive, referential gestures. The Sapir-Whorf hypothesis wouldn’t hold within this domain of dominance: we could assume a word in any language that would be roughly equivalent to, say, “tree.” At the other extreme, though, where possible relations are constructed intra-linguistically, the dependence of thought on language would be the greatest. We have no reason to assume, for example, that in other languages things are figured “out.” As Slobin goes on to point out, though, the process of grammaticalization relativizes the distinction between the domains of dominance, since content items make their way down the “cline” to grammatical ones. “Basic verbs,” according to Slobin,
appear at the beginnings of grammaticization clines because, when they are used in a conversational context, they contrast with the more specific verbs that could be used in that context, thereby signaling to the hearer that those more specific meanings were not intended. This opens the way for the kind of pragmatic inferencing and reanalysis that lie at the heart of grammaticization.
Given these facts, it is evident that the special character of grammaticizable notions has its origin, in part, in the lexical items from which grammatical items are prone to develop. That is, the “open class” is already organized into general and specialized terms—and this division can be accounted for by quite ordinary psycholinguistic and communicative processes… Why are such words prone to grammaticize? Because of their generality they are both highly frequent and likely to be used in contexts in which the speaker does not intend to communicate a specialized meaning. (433-4)
Slobin goes on to give the example of verbs designating “taking”—if I use a more specialized verb like “grasp” or “seize,” I wish to draw attention to the manner of taking possession; if I use a more general verb like “take,” I thereby draw attention to a more general domain of activity (watching over, accepting responsibility for, and so on). “Take” is now primed to enter the process of grammaticalization, which might involve becoming an “auxiliary” verb or, in the case of “take,” entering into a range of idioms (take over, take on, take it to, take down, etc.).
The process of grammaticalization is surely shaped, as Slobin says, “by the online demands on the speaker to be maximally clear within pragmatic constraints and maximally efficient within economy constraints, and by online capacities of the listener to segment, analyze and interpret the message” (431), which is to say, by the interest in establishing a sustainable form of joint attention—but the initial, implicit marking of the distinction between more general and more specialized semantic domains that Slobin places at the origin must first of all involve a shift in attention that constitutes an experience of learning. I would go even further, and say that that semantic domain could not have been imagined until it had been opened up and shared. The more generalized semantic domain recuperates some interruption in the attentional loop—I would assume it is noticed when further descent down the cline would exacerbate the distraction caused by the interruption and that it wards off the danger of imminent fixation. What happens here is that a possibility within language opens a possibility within thought—the distinction between “take” and “seize” makes it possible to imagine “taking responsibility,” or “taking one’s time.”
At this point, originary thinking moves beyond cognitive linguistics, because we must assume that there were a few words and then many, and those few words must have covered more semantic space than the later, specialized split-offs; even more, the earliest words must have been more thoroughly embedded in the imperative and gestural-postural worlds than we can easily reconstruct now. The original “take,” then, must have included much of what was to be distributed to more specialized semantic domains. To “take” must have meant to acquire and possess in accord with sacred purposes and ritualized practices. The initial move towards grammaticalization, then—that transformation of a word, whose meanings have been evacuated and given over to specialized terms, into a word covering newly imagined cognitive, social and moral domains—is a retrieval of the originary content of the word. This is a retrieval forward, not a recovery of the identical meaning: “taking responsibility,” “taking time,” “taking over,” and so on don’t return us to that earlier ostensive world but, rather, create new ostensive possibilities of deferral, where promises can be made (in a promise, one allows oneself to be “taken”), initiatives “taken,” obligations incurred. In other words, a backward ascent is a precondition of further descent down the cline, as the new mode of thinking in language takes over or becomes common possession. And this also means that the development of chunks and commonplaces, on the one hand, and the “de-chunking” that we can call “thinking,” on the other, are complementary modes of language development and language learning: “thinking” is initiated when a piece of a chunk “sticks out” (because the chunk is used mistakenly, because it is learned so well as to become material rather than transparent, because it collides with other chunks…), is withheld from its normal circulation, and opens up a new grammatical and semantic domain. This withholding from normal circulation, in fact, conforms to the structure of deferral, whereby an act is converted into a gesture—gestures must be composed so as to indicate that a particular movement could be completed in many other ways, but is instead being (in)completed in this way. And the (in)completion in the case of the aborted act/gesture can also be generalized to a range of as yet unanticipated situations, whereas acts are bound to a restricted context.
The consequences for ethics of this reciprocal implication and generation of language and thinking are as follows. Ethical concepts like “equality” and “fairness” are not really ideas that people, as folk psychology would have it, believe in and act upon—for one thing, such terms only have meaning within some frame of reference; for another, “believe in” and “act upon” are extremely imprecise ways of determining our relations with signs. These and other terms take on their meanings not only negatively, as the rejection of specific, and threatening, forms of inequality and unfairness, but positively, as exemplified by those who resisted or renounced the benefits of the inequality or unfairness in question—and in doing so iterated the originary event by deferring some kind of (potential, perceived) communal self-immolation. Such figures serve as iconic signs that are made the center of ritual (whether religious, cultural or political), and the actions of those who commemorate and imitate those figures are themselves privileged. Generative ethical concepts, then, are those that clarify the activities of such moral exemplars. Ethical advances, accordingly, are events that deepen a particular mode of deferral by bringing within the scope of deferral a precondition for the act that has been subject to the prior deferral. Such advances become possible and necessary when the original deferral has eroded, leaving members of the community with the choice of abandoning or restoring it—but the restoration must consist of more than insistence on the continuance of the frayed practice; it must diagnose and denounce the cause of the erosion and establish preventive measures against its recurrence. Hence the need for “deepening.” When such a restoration or return is successful, those who represented it—undoubtedly extremely divisive figures at the time—will have been those who saved the community. The rest of the community will then be taken in tow by those who respond most vigorously to the call of those founders, and by the practices, norms and institutions they found (or the community will become prey to its own indiscipline). This also means that the community need not hold itself to the same degree of rigor as the founders, only to revere them and preserve the possibility of perpetual renewal—the role of monasticism in Christian societies can perhaps be understood in this way: the point is not that everyone should be chaste, eschew worldly goods and honors, and devote themselves exclusively to seeking the will of God, but rather that those called to such renunciation should be honored as models to imitate, to the extent that one can, in one’s daily life.
A study in ethics, then, can be reduced to the study of those practices, norms and institutions, which is to say, a study of disciplines, in the fullest sense of the word(2): including both systems, individual and collective, of self-control, self-refinement and self-overcoming, and institutions of inquiry, characterized by constraints upon observation and vocabularies of analysis and description. Indeed, ideas are nothing more than prompts to the construction of disciplines. And disciplines are constructed within language, also in the fullest sense of that word—from the ostensive, postural/gestural realm, up through imperatives, interrogatives and declaratives to discourse. The initiation of a discipline involves the recuperation of a word (again, in the fullest sense—including phrases, idioms, and grammatical constructions), a word with a disciplinary history, and the turning of that word into the center of a new set of constraints. And, I would argue, that recuperation is a step up the “cline,” or what I will now call an “upcline,” in which a word is deliberately removed from a circulation that splinters its uses, and placed within a new circulation, or idiom, that treats the word as a prompt for a new hierarchy of declaratives, imperatives and ostensives. (Perhaps the prototypical example of the foundational disciplinary move is Plato’s withdrawal of “good” from its circulation as an adjective indifferently applied to “meal,” “athlete,” “house,” etc., to a much more restricted use as the Good.) Disciplines are spaces set aside for continuous language learning, a perpetual, deliberate break from ostensive and imperative uses of signs, uses that habitually provide smooth paths to satisfaction, to new ways in which, as Tomasello puts it in describing the young child’s internalization of the linguistic symbol, we can become aware of how the “current situation may be attentionally construed by ‘us’” (13).
To institute a word in this way is to construct a rule for its use, a rule designed to prevent other possible usages, and a rule drawn from some actual or imagined domain of prior usage. Perhaps the earliest such procedure is the ubiquitous ban on pronouncing the name of God—the first word. If the name of God is interdicted, then a system of circumlocutions and euphemisms, drawing on putative “attributes” and “effects” of God, must be elaborated. Interdicting the name of God is one step beyond (a deepening of) the interdiction on appropriating God. The rules of politeness and civility work in a similar way, expressing through procedures applied to tone, gesture, and so on, one’s commitment not to do certain things. In each case what is deferred is the blasphemous, coarse, brutal, barbaric—some form of violent appropriation. As Philip Rieff has argued, though, any system of interdictions includes a system of remissions: profane and forbidden practices that are allowed within a circumscribed space, like Bakhtin’s “carnivalesque.” The internal disintegration of a system of sacrality comes when the remitted practices are used to point out the “hypocrisy” of the defenders of the sacred and to reverse the causality between deferral and authority—that is, instead of authority being conferred upon those who submit themselves to greater ordeals of deferral, the system of deferral and discipline comes to be seen as a mere justification of the privileges enjoyed by those with authority. Of course, there will often be quite a bit of accuracy in such charges, but we can distinguish attempts to dismantle discipline from calls to restore it insofar as the former turn their satirical weapons against the latter. A priori hostility towards the sacred and sacred authority, the central fixation of the modern world, and unremitting mockery of such authority, the source of its distractions, always serve the purpose of releasing inhibitions in the name of nature. Those who have been liberated from inhibitions, while still in possession of the entire vocabulary of the discipline to whose destruction they have dedicated themselves, have considerable advantages over the defenders of deferral and discipline. This is the advantage exploited for quite a while by Marxism, in which, in Michael Polanyi’s terms,
[s]cientific skepticism and moral perfectionism join forces… in a movement denouncing any appeal to moral ideals as futile and dishonest. Its perfectionism demands a total transformation of society; but this utopian project is not allowed to declare itself. It conceals its moral motives by embodying them in a struggle for power, believed to bring about automatically the aims of utopia… The power of Marxism lies in uniting the two contradictory forces of the modern mind into a single political doctrine. Thus originated a world-embracing idea, in which moral doubt is frenzied by moral fury and moral fury is armed by scientific nihilism. (59-60)
Understandably, it took liberal citizens quite a while to find means of defending themselves from this combination of frenzy and scientific nihilism disguised as oscillations between skepticism and certainty; perhaps they have not yet completely learned how to do so.
A proliferation of disciplines comes in the wake of the decline of a shared sacrality—God is no longer named, but the word/names that found the disciplines (“society,” “unconscious,” “nature,” etc.) are nonetheless attributes of that which arrests some act of appropriation and instigates shared acts of attention. Any discipline is based upon some form of deferral, and upon some innovative rule of language and, therefore, any discipline adds to our collected means of discerning ethical exemplarity. Even if we take the most extreme apparent counter-example, the euphemistic “language rules” by which the Nazis conspired to avoid overt references to the mass murder they were committing in the very documents facilitating and recording that mass murder, we can see why this is the case. In itself, an Oulipo-style language game devised to discuss horrific acts obliquely through rules regarding the uses of synonyms, periphrasis and other methods might be very interesting and instructive, providing markers of the devastation crimes against humanity wreak upon our language (or, perhaps, revealing the vulnerability of our bureaucratic language to totalitarian infiltration); in cases where one is trying to aid victims of such acts, such language games might be a necessarily tactful approach towards enabling the survivor to arrive at his own language for describing what he has undergone. They are only objectionable when they serve to distract others from the fixation upon the destruction of disciplines driving the murderers, playing upon most people’s unwillingness to assume that anyone would be capable of such acts (or, less generously, most people’s desire not to accept the responsibility such knowledge would bring with it) and consequent willingness to put the most charitable construction upon the euphemisms. In that case, the Nazis’ resort to such language rules indeed contributes to our capacity for ethical assessment because the euphemisms break down in response to a demand to be provided with the referents that even bureaucratic discourse must ultimately supply.
Euphemistic discourse is ultimately a question of bureaucracy, central to market and democratic societies where everything is recorded and innumerable disputes must be publicly adjudicated according to ever accumulating rules. Since bureaucracies must both record and neutralize conflicts, euphemism is intrinsic to their functioning. They do so by ensuring that all positions are included in the system and that any position that can’t be named by bureaucratic vocabulary is rendered invisible and unthinkable. But part of that system is the mechanisms by which the invisible and unthinkable can be named within the system. As more of these processes go online and are governed by algorithms resistant to influence by the subjects of bureaucracy, it seems likely that important ethical questions will cluster around what is a set of globalized processes of normalization. The terms I have been developing here can address the question of bureaucratic language rules as follows: the way bureaucracies direct the attention of the bureaucrats themselves can be along the same line of attention as that provided for their subjects, or the subjects can be turned into objects of attention, treated as nothing more than potential distractions. The way we can tell the difference is by determining whether language learning takes place between the two sides. Can the bureaucratic terms be used outside of the bureaucracy, and are bureaucratic terms permeable enough to allow for the borrowing of outside terms? Even more precisely, do the bureaucrats (in the very broad sense that anyone who publicly assesses others, including doctors, teachers, lawyers, and so on, could be considered one) treat their “charges” in a way that assumes that those charges might one day, however distant or unlikely the prospect, themselves enter the bureaucracy and perform assessments? These questions can only be answered performatively. The sign that the answer is “no” will be that outside idioms are diagnosed rather than integrated, and treated as symptoms of whatever is abnormalized by the bureaucrats. But such symptomatic approaches to the subject license exceptions to established procedure, exceptions that then become codified within established procedure. The licensing of exceptions is the licensing of the very desires that must be deferred if rulers are to be fit to rule—in this case, what might be the central modern desire: to rule humans effectively, without resistance, as the scientist handles his objects. And the licensing of those desires entails the discrediting of the source of their prohibition, and this discrediting must become a fixation, since any interference in the fulfillment of the once prohibited desires comes to be treated in the most inimical way. In that case, we can see totalitarianisms, leaving aside the specific ideologies informing them, as titanic explosions of forbidden desires, above all the desires to dominate absolutely, murder, avenge real and perceived injuries, and humiliate. That totalitarian movements are disinhibitory rather than disciplinary thus becomes transparent. But we need not set aside the specific ideologies, since Nazism and Communism are themselves nothing more than elaborate justifications for such a “weaponizing” of bureaucracy; bureaucracies must now be assumed to be potential “incubators” of such outbreaks, and their monitoring a significant, if not central, concern of ethical thinking.
Proceduralist tendencies in modern art are part of this process of monitoring. Producing a work according to an arbitrary rule is preparatory for noting and countering bureaucratic potentialities: the random undoes the increasingly precise probabilism claimed, if not actually accomplished, by contemporary bureaucracies. In that case, any language game, or constraint, any placing of some piece of language within a restricted circulation, serves an ethical purpose insofar as the paradoxicality of the rule is not neutralized (and when it is, the rule can be re-presented along with the paradoxicality of that attempt at neutralization). It is decreasingly possible to accept the claim of ancient revelations that the arbitrary has been removed from those revelations, leaving nothing but a historically and anthropologically necessary “content”—indeed, the originary hypothesis introduces into any revelation the paradox of the originary scene, that what is named as significant in any revelation is presumed to already have the significance it can in fact only have via the act of naming. In that case, we can start from the opposite extreme, with the assertion that any constraint, even or especially the most arbitrary (say, a rule against using a particular letter), provides ethical benefits. Some kind of “upcline” necessarily results from any constraint, as words (in the broadest sense) are shoved out of joint and become newly available objects of attention, as long as the shared attention the constraint facilitates doesn’t serve to distract attention from some fixation hoping to evade scrutiny. Further, any upcline enhances our capacity for joint attention, which can then always, even if slightly, be transferred to the reconstruction of other constraints. It is also helpful to consider how much of the arbitrary is involved in disciplinary constructions that have come to seem reflections of nature—we can readily see, by now, that, for example, Freud’s renewed and restricted circulation of the “unconscious,” and Marx’s of “labor power,” had a great deal of the arbitrary to them (and there was much that was intellectually generative in that arbitrariness)—but such insights come much harder with prominent disciplines closer to home. If one were inclined to devise a “proof” for the originary hypothesis, it seems to me we might find one in the fact that it works equally well if we assume that those on the originary scene reflected, with the skill of the great realist novelists, the nature of all those congregated; or, on the other hand, that the originary sign was a contingent, arbitrary construct, arrived at in desperation, thoroughly pragmatic, devoid of ontological claims, and providing nothing more than a rule to cling to. Without an originary hypothesis, other disciplines in the humanities and social sciences, to differing extents of course, smuggle in a lot of naturalized assumptions in order to keep the arbitrariness of their constructions at bay—they presuppose a good deal of humanness in the constructions of the human. There’s no reason to object to that, but recognizing the ingredients of arbitrariness and idiosyncrasy in the mainstream disciplines might make them more generative of insights and less tolerant of complacent euphemisms.
In the end, perhaps the best way of distinguishing the production of new, upclining idioms from obfuscatory euphemisms is that only the former can read the latter into a new space by inhabiting the paradoxes of both, indeed all, sets of rules.(3) The euphemism is the tribute evil must pay to joint attention, and it can always be taken up so as to direct attention towards the complex of distraction and fixation constitutive of the euphemism.
Ethical behavior in everyday life, then, relies on the creation of little disciplinary spaces out of the available semiotic material. This doesn’t mean that all those who “just” treat others as they would like to be treated are unethical; it just means that what, at a given moment and in a given situation, they take to count as “treating others as they would like to be treated” is “down cline,” further grammaticized, from the procedures devised to refuse to participate in a newly perceived form of unequal treatment. Even more, in any action recognizable as ethical, we will be able to identify some “upclining,” however minimal and however mixed with “remissions.” It follows that ethical inquiries are best devoted to the exemplary practices initiating new procedures and the way those practices are imagined across various spaces of discipline and deferral. Some potentially exemplary practices must be winning out over others—others that allow for too little remission, or that are too blatantly arbitrary. (I happen to believe that a community that, every week, chooses to omit a specific sound or letter from its oral or written communications will be, all other things being equal, more ethical than a community that doesn’t. Probably happier, too. But such a procedure would not address what the community itself takes to be ethical dilemmas, something new disciplinary procedures must at least gesture towards. Some distinction between the newly generalized/retrieved term and its more specialized instantiations will be necessary to constitute the space.)
My discussion, thus far, conflates religious, cultural, esthetic and scientific practices under the category of the “disciplinary.” I think this is essentially correct, insofar as all of these disciplinary spaces, to the extent that they become more rigorously disciplinary, make society more ethical; even more, they contribute to a vision, like Michael Polanyi’s, of a free society composed of a vast array of overlapping disciplinary spaces (what Polanyi calls a “society of explorers”). And while differentiations need to be made across these practices, the best way to do so is to attend to the ways in which they impinge upon one another, rather than trying to establish an a priori ethical hierarchy amongst them. Scientific disciplines that account, in their initiation of new learners, for the esthetic dimension in the attraction and intelligibility of any theory and in the construction of hypotheses, and that honor the “awe” at the unknown that leads one to replace everyday modes of perception with concepts that address the intangible, will “upcline” more than those which imagine themselves to be purely “declarative” spaces. Artistic communities that account for their overlap with the sacred and the scientific, perhaps by inventing language rules that mimic the sacred, or by using mathematized procedures in generating rules of composition, will upcline more and be more ethical than those that don’t; and those religious practices that invite the esthetic and rename as God’s work the discoveries of science, while reminding scientists that their disciplines, too, have scenic origins, will have more upcline than those that don’t. In each case language learning is maximized because other discourses are allowed to induce the kind of semiotic crises that allow new problems of language to emerge between us.
David Olson, near the conclusion of his study of the role of education in literate and therefore bureaucratic societies, asserts that
In a modern bureaucratic world, knowledge, virtue and ability take on new form. Institutions such as science preempt knowledge, justice systems preempt virtue, and functional roles preempt general cognitive ability. Thus, ability, knowledge and virtue are construed and pursued less in the form of private mental states and moral traits of individuals than in the form of competence in the roles, norms, and rules of the formal bureaucratic institutions in which they live and work. (288)
Olson is, of course, right, and yet “we can conclude with x degree of certainty that, given the assessment of evidence according to the established procedures, subject to follow-up research regarding certain inconclusive results…” cannot completely replace “I think that…”; nor can “the aggregate well-being of the community, measured according to the following metrics and subject to international comparison, is likely to decline…” completely replace “I refuse to…”. “If,” as Olson continues, “these institutions fall apart, personal competence and private virtues tend to vanish with them,” it is equally the case that without reserves of personal competence and private virtues that are not solely dependent upon these institutions, the institutions will fall apart. (Why, after all, shouldn’t modern bureaucratic institutions, once up and running, be perpetually and automatically self-reproducing?) These reserves are located, in part, in other institutions that import, as I have suggested, idioms at odds with the discipline in question (if all institutions are equally exhausted, though, these reserves will not be forthcoming). Change within disciplines comes, as Thomas Kuhn most famously argued, from the emergence of anomalies that can no longer be reconciled with the prevailing research program. It’s hard to see how the competency in roles and rules Olson refers to would enable one to identify, look for, or even anticipate the existence of such anomalies, though, since anomalies will by definition undermine those roles and rules. Only living in anomalies and paradoxes, originary thinking, iterating the oscillation between model and rival, sign and object, name and meaning, can sustain the generativity of the disciplines. Upclining is originary thinking—retrieving the word forward, withholding a more general meaning yet to be instantiated from the general semantic circulation, is the way of living in anomalies and paradoxes. It may be true that “I think that…” and “I refuse to…” are no longer worth very much, but we can “post” ourselves within disciplines, iterating distracting entrances that allow for attention to be directed to the discipline as such, and attention to disciplinarity is the form attention to scenicity takes today.
The definition of ethics as upclining might, finally, allow us to revisit the critique of White Guilt and victimary discourse that has become such an important part of Generative Anthropology. White Guilt could, perhaps, be reframed as a deepening of the modes of deferral constitutive of liberalism and democracy, that is, as a targeting of previously invisible preconditions and predilections that make the members of free societies more likely to advocate or remain silent in the face of violence against despised minorities. Language rules like referring to the “N-word” and, more generally, avoiding verbal formulations that single out members of particular groups and make them more likely to face scrutiny that might accord with collective probabilities (say, the greater proportion of black perpetrators of violent crime in the US) but would be unjust when applied to the individual, could be ethical advances. For this to be the case, though, the rules would have to apply to anyone within the “game,” that is, anyone who has “standing” to hold another to the rules—whether this would mean that, for example, the “N-word” would be equally off-limits for blacks, or that it would be assumed that any black individual would have his own “blacks” before whom he would be expected to experience a form of White Guilt, or whether a complementary form of “Black Guilt” marking the desire for revenge against the oppressor might emerge, or something else altogether, could not be determined in advance. But new forms of deferral for some that are simultaneously remissions or invitations to transgression for others are unsustainable, and for the same reason that one person can’t play chess while the other is playing checkers—a complete confusion of the rules results, ultimately “liberating” everyone from discipline. But without deferral and discipline, values can only be derived by reversing the hierarchy between deferral/discipline and authority, so that values descend from the perpetual exposure of hypocrisy, and the liberated are delivered to the competition to commit the transgressions that expose the biggest gap between assertions of deferral and actual appropriation.
The cognitive linguistics from which I have drawn much of my discussion of language learning, joint attention and disciplinarity speaks, like the phenomenological and Gestalt traditions of which it is at least a distant cousin, of “intentionality.” This is also the language of GA. I insist, though, on making “attentionality” the prior, constitutive concept. I certainly don’t deny intentionality, in the sense of “intending” an object and so constituting and conferring meaning upon that object against a relatively undifferentiated background; nor, even, as a locus of interpretive retrieval of the meaning of texts and acts. Intentionality within GA is more strongly conceived, as the object is one of mimetic desire and shared deferral. But intentionality depends upon having one’s attention drawn to the object, and having one’s attention drawn depends upon attending to another who brings the object into view and, finally, upon becoming an object of attention of others (an attention one can’t share). Seeing this broader attentional loop as the condition of possibility of intentionality enables the inquirer to direct attention to that in intentionality that is constitutively excluded and yet constitutive; to put it another way, “attentionality” provides a way of accounting for the alterity in language, that which in my utterances is not “mine” but is, rather, passing through me and carrying me along. The acknowledgment of the alterity of language, which we owe primarily to the various post-structuralist or post-humanist theories, makes it impossible to speak of the generic human (which would presuppose some universally shared world-as-scene and construct alterity as deviance) and imperative to speak of fields of human being: disciplinarity, or the iteration of a particular attentional loop in a way that makes the boundary between intentionality and alterity in language productive. Productive, because one is knowingly composing a version of the human, a version open to the gazes issuing from other versions, through which come other gazes from more distant but perhaps, once attended to within the disciplinary space, more intimate, overlapping spaces.
Works Cited
Althusser, Louis, and Etienne Balibar. Reading Capital. Translated by Ben Brewster. London: New Left Books, 1972.
Bowerman, Melissa, and Stephen C. Levinson, eds. Language Acquisition and Conceptual Development. Language, Culture, and Cognition 3. Cambridge, UK; New York: Cambridge University Press, 2001.
Gans, Eric. “Originary Ethics.” The Chronicles of Love and Resentment. No. 431. September 15, 2012. http://anthropoetics.ucla.edu/views/vw431.htm
Gentner, Dedre, and Lera Boroditsky. “Individuation, Relativity and Early Word Learning.” In Language Acquisition and Conceptual Development, edited by Melissa Bowerman and Stephen C. Levinson, 214-256. Language, Culture, and Cognition 3. Cambridge, UK; New York: Cambridge University Press, 2001.
Hall, Stephen S. “Repairing Bad Memories.” MIT Technology Review, June 17, 2013. http://m.technologyreview.com/featuredstory/515981/repairing-bad-memories/.
Kriss, Sam. “Book of Lamentations.” The New Inquiry. October 18, 2013. http://thenewinquiry.com/essays/book-of-lamentations/
Lawrence, D.H. D.H. Lawrence and Italy: Twilight in Italy. Sea and Sardinia. Etruscan Places. New York: Penguin Books, 1972.
Lucy, John A. Language Diversity and Thought: A Reformulation of the Linguistic Relativity Hypothesis. Studies in the Social and Cultural Foundations of Language 12. Cambridge, UK; New York: Cambridge University Press, 1992.
Olson, David R. Psychological Theory and Educational Reform: How School Remakes Mind and Society. Cambridge, UK; New York: Cambridge University Press, 2003.
Polanyi, Michael. The Tacit Dimension. Chicago: University of Chicago Press, 1966.
Rieff, Philip. Charisma: The Gift of Grace and How It Has Been Taken From Us. New York: Vintage, 2008.
Slobin, Dan I. “Form-function Relations: How do Children Find out What They Are?” In Language Acquisition and Conceptual Development, edited by Melissa Bowerman and Stephen C. Levinson, 406-449. Cambridge, UK; New York: Cambridge University Press, 2001.
Sloterdijk, Peter. You Must Change Your Life. Translated by Wieland Hoban. Cambridge, UK; Malden, MA: Polity Press, 2013.
Tomasello, Michael. Constructing a Language: A Usage-Based Theory of Language Acquisition. Cambridge, MA: Harvard University Press, 2005.
Whorf, Benjamin Lee. Language, Thought and Reality: Selected Writings. Cambridge, MA: The MIT Press, 1956.
Notes
1. For discussions of the Sapir-Whorf hypothesis, the best place to begin is Whorf’s Language, Thought and Reality; for a more recent discussion, see John A. Lucy, Language Diversity and Thought: A Reformulation of the Linguistic Relativity Hypothesis, Cambridge, UK; New York: Cambridge University Press, 1992.
2. Such a study has gotten well under way in Peter Sloterdijk’s You Must Change Your Life, albeit not on the linguistic terms I will propose here.
3. For a very enlightening and enjoyable example of the kind of thing I have in mind, or at least one kind, see Sam Kriss’s “Book of Lamentations,” an essay in the online journal The New Inquiry that reads the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition as a dystopian novel: http://thenewinquiry.com/essays/book-of-lamentations/. In this way, it seems to me, a hypothetical origin of psychiatry from which the discipline could be imagined to have deviated is indirectly proposed, and therefore a possible post within the discipline.