Intentionality is characteristic of “systems” like ourselves that have intentions, which we may define as internal models of future states. Intentionality is complicated by the way in which intentional systems deal with other intentional systems. If I tell you, “Please open the window,” at first glance my order merely seeks to bring about a preferred state: having the window open. But this would be a complete analysis only for an order to a robot. Speaking to another person requires that I operate at a second intentional level: by making my request, I intend that you act intentionally to open the window. Yet even this analysis does not suffice. I intend not merely that you do so but that you do so specifically in response to my request, that is, I intend that you know that I wish that you act so as to open the window. Thus a typical speech act involves four levels, or two double levels, of intentionality.
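To picture this nesting concretely, here is a minimal illustrative sketch in Python (entirely my own; the names Attitude, holder, mode, and content are hypothetical, not drawn from the analysis above), representing each intentional level as a record wrapping the one below it:

    from dataclasses import dataclass

    @dataclass
    class Attitude:
        holder: str      # who bears the attitude: "I" or "you"
        mode: str        # "intends", "knows", "wishes", ...
        content: object  # a base proposition (str) or another Attitude

    # The four-level structure of "Please open the window":
    # I intend that you know that I wish that you act to open it.
    request = Attitude("I", "intends",
                Attitude("you", "knows",
                  Attitude("I", "wishes",
                    Attitude("you", "acts intentionally so that",
                             "the window is open"))))

    def depth(a: object) -> int:
        # Count the nested intentional levels above the base proposition.
        return 1 + depth(a.content) if isinstance(a, Attitude) else 0

    print(depth(request))  # 4

The alternation of holders ("I", "you", "I", "you") is what makes the four levels two double levels.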

There is a problem with this kind of reasoning, however: it leads to infinite regress. The regress would be blocked if I had to think out each level explicitly, since doing so would rapidly become impossible. But such explicitness is not necessary. After all, when I ask you to open the window, I am not thinking explicitly in four intentional layers; they form the implicit structure of my speech act. But since you implicitly know this as well as I do, your own understanding of my request may be said to add another layer: you know that I intend that you know that I wish… And since I know this as well as you do, I may be said to know that you know that I intend… Since this “knowledge” has no obvious effect on the speech situation itself, the number of levels is for all practical purposes indeterminate. This poses a problem for those who would use the number of intentional levels as a measure of “Machiavellian” intelligence, as is currently the trend in primate studies. (See Raymond Swing’s “Comments on GA” in Anthropoetics V, 2.)

Such fears led the behaviorists long ago to reject “mentalism,” grounded as it is on unverifiable subjective constructions, and to understand animal behavior, including our own, “behavioristically,” that is, at the zero level of intentionality. In this view, my speech act may be explained simply as a result of operant conditioning: I have learned that when I am hot and another person (and a closed window) is present, I can gain relief by saying “Please open the window.” However few or many degrees of intentionality I might lay claim to, behaviorist parsimony demanded that they be ignored.

Today the pendulum has swung the other way, and scientists consider themselves obliged, and able, to deal with “intentional systems.” For behaviorism and mentalism are not mutually exclusive. As the operations of the brain become clearer, the mental increasingly becomes just another category of behavior. Suppose what we call “having an intention” can be understood as the XYZ configuration of such and such complexes of neurons. At this point, the behaviorist can simply replace the mentalist notion of “intention” with the scientific term XYZ. Instead of “I intend to bring about situation S,” we would say something like “Gans’s brain is in XYZ configuration with content S.” To be complete, we would have to convert to behavioral terms the notion of “implicit intentional level” that seems indispensable to the analysis of human speech acts: the XYZ configuration would have to include, not necessarily at the level of conscious awareness, a hierarchy of intentional representations. Thus empirical research might conceivably confirm the four-level analysis given above; or, of course, it might disconfirm it.

Clearly we want to distinguish between those organisms that act “instinctively,” those that can formulate intentions, those that can recognize such formulations in others, those that can recognize that others can recognize their own such formulations, and so on. One way to do this is to seek evidence of deception. Deception is the commonest indicator of intentional level because it provides explicit proof that I have a “theory of mind,” that is, that I am concerned with your mental state; such proof is rarely obtainable when I merely communicate what is true. But even assuming that the accounts of deception among higher primates in the literature are accurate, there is no clear way, as Daniel Dennett laments after his experience with vervet monkeys, to assign intentional levels to them (see Richard Byrne & Andrew Whiten, eds., Machiavellian Intelligence, Oxford, 1988, ch. 14). It is clear enough that chimpanzees are more skilled than lower animals at manipulating others and predicting their reactions; what is not clear is how to correlate this intellectual superiority with intentional level.

The human use of language, by contrast, implies reciprocal recognition of intentionality. I express my intention to you in the context of your understanding not only the intention itself but its intentional status. The question of additional intentional levels arises because language allows us to create and test sentences like “I know that you know that I know that…” But language cannot have originated as a metaphysical parlor game. Can the originary hypothesis clarify the matter of intentional level?


Let us begin, as language itself must have begun, with the ostensive. When my gesture of appropriation becomes a re-presentation of its object, it expresses my intention that my interlocutor recognize my intention not to appropriate the object. Thus the originary sign does not merely have a referent; its having a referent is understood in opposition to an appetitive intention to appropriate that referent. The metaphysical notion of intentionality ignores this distinction between the intention to appropriate and the intention not to appropriate, which for GA defines human language. This does not mean that I cannot use language to announce my intention to appropriate an object. But when I do so, my language is not a pointing-to-what-I-want but a reference to an object independent in the first place of my desire, toward which I then contingently express this desire. I use the same representational means to request the object that I originally employed to renounce it.

This implies that the theory of mind I attribute to another human who requests, say, a banana, is qualitatively different from the theory of mind by which chimp A attributes to chimp B a desire for a banana (which A may thereupon hide, pretend to be unaware of, etc.). The fourth-level intentionality implicit in any use of language is not an automatic consequence of the substitution of a sign for a referent. What distinguishes language from simple substitution is precisely its mediation through an originary human collectivity. My telling you X implies that I want you to know that I am telling you X, because language was from the origin a way of informing one’s interlocutor not simply of the presence of the sacred object but of the speaker’s intentional relationship to it. While pointing to the center, I inform my fellow participants that I will not appropriate it and that I want them to know that.

We cannot explain the limitation to the human species of what Thomas Suddendorf (in Michael Corballis & Stephen Lea, eds., The Descent of Mind, Oxford, 1999, ch. 12) calls metamind, the ability to consider representations as representations, simply as a result of our “greater intelligence.” None of the current social-science explanations of the human mind fully accounts for representation as a cultural rather than a biological phenomenon. As we saw in Chronicle 195, Durkheim understood that the sacred cannot be derived from the natural without the mediation of the social; in other words, the cultural cannot be derived from the biological without the mediation of the communal. Representations, whether linguistic or ritual-esthetic, are not merely artifacts of “intelligence” but products of a new level of interaction that could only have emerged in a collective context. This explains why our ability to construct recursive chains such as “I know that you know that I know…” is irrelevant to the intentional level implicit in language.

Suppose I come out of a movie theater and tell you, “I liked the film.” As we have seen, I am telling you (1) I liked the film, and (2) I want you to know/think I liked the film. But (2), as well as (1), is a possible locus of deception: I may want you to think something that is not true. Because you know this as well as I, you may interpret my statement as a lie. And since I know this, I may be perpetrating a third-level deception: (3) I want you to think that I want you to think I liked the film. That is, I intend that you think, “he wants me to think he liked the film (but he really didn’t),” when I really did like the film.

The same analysis leads us to level 4. Suppose I want you to think that I want you to think that I want you to think I liked the film; that is, I intend that you think, “he wants me to think he wants me to think ‘he liked the film (but he really didn’t)’ when he did like the film,” when I really didn’t. But because you can anticipate this strategy as well, I may anticipate this anticipation and perpetrate a fifth-level deception. This analysis leads to the conclusion that there is no end in principle to intentional levels, even if they reach a practical cut-off point.
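The mechanical character of this regress can be made visible by generating the level-n statement directly; the following sketch (my own illustration, with the hypothetical function name level) simply wraps the base proposition in n - 1 layers of “I want you to think that”:

    def level(n: int, base: str = "I liked the film") -> str:
        # Wrap the base proposition in n - 1 layers of deceptive intent.
        statement = base
        for _ in range(n - 1):
            statement = "I want you to think that " + statement
        return statement

    print(level(1))  # I liked the film
    print(level(4))  # three layers of "I want you to think that ..."

Every value of n yields a well-formed sentence, which is just what the argument above asserts: in principle the construction can be iterated without limit.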

But this conclusion is not justified. Beyond level 3, the analysis of layers of deceit is purely academic; it corresponds to no concrete behavior. I can either say “I liked the film” in such a way as to sound sincere, or in such a way as to appear to be insincere. But the way in which you interpret my words, the number of layers of deceit you think me capable of, is not a function of the speech-situation itself. You can only choose between believing that I’m trying to make you believe A or believing that I’m trying to make you believe not-A. The situation is homologous to that in the game of morra or “choosing,” where each of two adversaries takes “odd” or “even” and then extends either one or two fingers, the total of which determines the victor. In principle I will extend the number of fingers that I anticipate you will not expect. You will anticipate that I will anticipate this, and so on. One usually plays two out of three. Let us say I played “1” last time; the second time, you may expect me to change to “2,” so I play “1” again. But you may anticipate this move as well, so I play “2.” But… However long the analysis goes on, there are still the same two possibilities, and it is absurd to continue to attribute to the players ever higher levels of intentionality, let alone of intelligence.
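A minimal sketch can make this collapse concrete (again my own illustration; the function anticipate is hypothetical). Each level of “he expects me to play this, so I play the other” merely flips the choice between the two available moves, so any depth of anticipation reduces to its parity:

    def anticipate(base_move: int, depth: int) -> int:
        # One flip per level of "I'll play what you don't expect."
        move = base_move
        for _ in range(depth):
            move = 2 if move == 1 else 1
        return move

    for d in range(6):
        print(d, anticipate(1, d))  # alternates 1, 2, 1, 2, 1, 2

However deep the regress, only the same two moves can result; further levels change which of the two is played, never the structure of the game.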

Similarly, the number of “moves” we may wish to count in the deception game is independent of the communication structure itself. This does not prevent us from constructing representational models with indefinite numbers of layers, where A thinks (knows, wants…) that B thinks that C thinks that D thinks that E … thinks X. But such chains are merely formal constructions that fail to correspond to any specific human behavior. I would stress that I do not attribute this failure to the limits of our mental capacity (limits which, incidentally, are not shared by the computers we use to help us think) but to the fact that human language as originarily constituted operates on two double levels of intentionality. A tells B about C, but in such a way that B is made aware that A is sharing the sign for C with him. B’s attention is drawn to C, but he is simultaneously connected socially with A. This is explained by the originary hypothesis, but not by theories that see language merely as a formal system that substitutes signs for things.