Abstract: In this article I argue that our innate prejudice for processing information in hierarchical terms can be traced back to the originary scene. Hierarchical thinking is anthropologically rather than cognitively grounded insofar as it is contained within the structure of deferral present in representation and symbolic thinking. I examine the formation of hierarchical organization and demonstrate that it emerges in response to two heterogeneous exigencies, glossed by Martin Heidegger as two incompatible understandings of transcendence, which can be translated into the Generative Anthropology notion of the pragmatic paradox. Hierarchical order originates in and is a temporal unfoldment of our reaching for the closure of both of these transcendent gaps at the same time. I show that this unfoldment corresponds to the logic of linguistic parsing, scientific explanation, and narrative closure – all already contained in the concept of deferral, a recursive concept that contains future deferrals within itself. Another important connection that this paper makes is between the Artificial Intelligence algorithm of backtracking and the execution of a complex chain of deferrals, which, similarly to backtracking, involves a cognitively opaque process of “thinking from the end.” Following Gaston Bachelard, I call this process consolidation. Thus one way of representing the pragmatic paradox of Generative Anthropology is as a disjunction between forward projective thinking and backward consolidation.
Keywords: Generative Anthropology, Heidegger, Bachelard, transcendence, philosophy of science, explanation theory, hierarchical processing, backtracking, Prolog
The importance and universality of hierarchical thinking to human cognition cannot be overstated. Hierarchical reasoning is mobilized whenever we approach complex cultural constructs, such as language, music, literature, art, and science. But we make use of it even for the most trivial everyday tasks, such as preparing breakfast, which we automatically understand as a sequence of smaller subtasks on a lower level: as brewing a cup of coffee followed by making toast, for example. We use it both actively and passively, applying hierarchical thinking, for example, to purposive behavior, such as problem-solving, planned action, skill acquisition, and memorization, but also in being aware of hierarchies in surrounding phenomena, such as music or language. In this paper, I will make an argument for the anthropological origin of hierarchical structures. The first, longer, part of my paper will be dedicated to clarifying the relevant characteristics of hierarchical thinking and related concepts. In the second part, I will show how these key features naturally follow from an anthropological understanding of deferral, as it emerges on the scene of representation.
The strongest evidence in support of the fundamentally hierarchical nature of thinking is our quest for the unification of knowledge, reflected in the unexamined compulsion to discover a unified theory of basic forces in physics or a unified explanatory model, which amount to the same thing. According to the classical model of explanation in the philosophy of science called the covering law model, scientific laws are sets of generalizations that form deductive systems, which in turn consist of propositions arranged on various levels in such a way that lower-level hypotheses are derived from higher-level hypotheses; moreover, “the logical strength of the hypotheses increases the higher their level” (Braithwaite, “Scientific Explanation” 17). Thus “to explain a law . . . is to incorporate it in an established deductive system in which it is deducible from higher-level laws” (347). Searching for a more powerful explanation is climbing higher and higher on the deductive ladder. Some philosophers of explanation do not question this mental habit while others register it but do not take it further than noting that it brings “intellectual satisfaction” (340): “the subsumption of [a] higher-level law under a still-higher-level law raises the intellectual level of . . . generalization” (346).
The fact that we have an unquestioning expectation of hierarchical structures governing the laws of science and reflected in our intuition of explanatoriness, which grows with the depth of the explanatory tree, is a testament to the fundamental role of hierarchies in human reasoning. But where does the assumption of the hierarchical nature of reality come from? Neuroscientists speculate that it is hardwired in our brains and is located somewhere in the prefrontal cortex, because functional magnetic resonance imaging shows this to be the area that becomes activated when people are engaged in hierarchical activities. But to view it as merely biological “hardware” or computational “firmware” would be to overlook its relevance to representation and symbolic thinking, if only because the use of language at its most basic functionality of linguistic parsing is explicitly and self-consciously hierarchical. We would be missing an important piece in the larger picture of what it means to be a symbolic thinker on the scene of representation if we did not look at the anthropology of hierarchical structures and, consequently, subject it to originary analysis.
Since in the rest of my paper I will be discussing hierarchies using the example of language, I will narrow my focus and restrict my discussion to self-similar hierarchical structures, of which language is one. Hierarchical self-similarity simply means that the same structure we find on a lower level is repeated on a higher level, such as a clause within a clause, a phrase within a phrase, or a compound noun within a compound noun. Representing a whole structure with one simple rule is an especially powerful and economical method of encoding and retrieving information, since it sharply reduces memory requirements while at the same time allowing for infinite generative possibilities and flexibility in manipulating symbols.
In order to speculate how the ability to think in hierarchies arose on the originary scene, we need to define what a hierarchical relationship entails with greater specificity. What does it mean for a level of conceptualization to be above another level? This is a complex concept that involves two operations: disjunction and ordering. How do these mutually interconnected characteristics arise on the scene of representation? I believe that the term “subsume” as used in the covering-law theory statement that a “generalization is subsumed under a higher-level law” can give us a clue. Subsume is a word that connotes expansion and generativity, present in its meaning of “incorporation as part of a more comprehensive order.” It goes hand-in-hand with the term “covering” from the “covering law theory,” with both evoking an image of an overturned bowl, a lampshade, or overhanging tree branches, which suggest inclusion and sheltering. But in subsuming there is an additional connotation of nesting, stacking, or building up, such as in a stack of bowls or a collection of Russian dolls, which suggests height or extension along a spatial dimension. Objects connected through the structure of subsumption both envelop and rise, contain and stretch.
In this dual imagery, I detect an echo of what Martin Heidegger stipulates as two types of transcendence that define the human condition, and further down the line, the scenic configuration of Generative Anthropology, as I will show shortly. But first, I will show why these two logically incompatible and heterogeneous types of transcendence both motivate and stymie our impulse to explain and reconcile new information.
In his Metaphysical Foundations of Logic, Heidegger calls the first type of transcendence the epistemological transcendence and relates it to our desire to know the other person’s mind, to bridge the gap between two monadic consciousnesses. He specifies that this meaning of transcendence is opposed to immanence, with immanence referring to what is present to a consciousness, and transcendence, correspondingly, to what is outside it. Metaphorically, it is connected to the ideas of inside vs. outside, to crossing a barrier or leaping over a wall. The main image that captures it is that of a consciousness being locked inside its own box of interiority and imagining a passage that would lead it toward the exterior. This concept of transcendence is the foundation of a theory of knowledge; hence the name, given that the epistemological project sees itself as a crossing over an abyss of ignorance or peeking inside a hitherto closed box.
The other type of transcendence is called the theological transcendence and is paired with contingency as its contrasting term. Contingency, Heidegger explains, is “what touches us, what pertains to us, that with which we are on the same footing, that which belongs to our kind and sort” (161). Another way of thinking about this is that contingency describes things and phenomena that are available to us or that are within our reach, whether spatially, temporally, or relationally. Consequently, that which is transcendent in this sense would be outside our reach: “what is beyond all this as that which conditions it, as the unconditioned, but at the same time as the really unattainable, what exceeds us” (161). In other words, things that impinge on each other can be imagined to exist in the same space, perhaps in a causal relationship, each conditioned by the one before it, while the transcendent is the first cause that has launched the causal chain into existence but itself exists in a conceptually different domain. It is this relationship between the conditioned (creation) and the unconditioned (creator) that is captured in the name theological.
These two senses of exceeding human limitations are, for Heidegger, fundamental to “the primordial constitution of the subjectivity of a subject” (165). We can, for example, recognize them in the linguistic division of tropes into two classes of figures, metaphor and metonymy. The metonymic axis of the sliding signifier captures the contingency of the conditioned, which can only be grasped from the transcendent position of the unconditioned. Similarly, the metaphoric axis of the linguistic system expresses the forceful act of bringing together two separate semantic domains, and can thus be thought of as a transcendent gesture of creating an artificial path between two enclosed boxes, as pertains to the epistemological transcendence. The same analogy can be extended from metaphor and metonymy to the Derridian system of différance, combining the operation of difference and deferral. If we think of it in these terms, we can say that the epistemological transcendence contemplates the unbridgeable and arbitrary spatial difference between two signs, while the theological transcendence leads one on an ungroundable chase of meaning through linking dictionary definitions.
Both of these aspects are reflected in hierarchical thinking. This becomes apparent if we analyze hierarchy through the concept of the covering law, which is its particularly explicit and self-conscious expression, and specifically the figure of subsumption. I see the latter’s dual orientation as a fitting image of the two transcendent polarities in superposition. Thus the stacking aspect, with its spatial extension of height embodying the vertical climbing of the hierarchical ladder, can be understood as a manifestation of the theological dimension of transcendence. The similarity lies not only in its linear extension but in the fact that each higher level of the hierarchical representation becomes more and more powerful in its generality, applicability, and explanatoriness: “the logical strength of the hypotheses increases the higher their level” (Braithwaite, “Scientific Explanation” 17). The imagined apex of the hierarchical pyramid, often hypothetical and unreachable, such as the goal of constructing a unifying theory in physics, exerts its transcendent pull toward the final destination, but “there is no ultimate end to the hierarchy of scientific explanation, and thus no completely final explanation” (347).
At the same time, the epistemological meaning of transcendence can be detected in the other set of associations evoked by the term subsumption, those of sheltering or containing, implicit in the idea of what it means to incorporate a lawful regularity within a deductive system. Explaining something, that is, making it comprehensible and coherent within the governing paradigm, is an epistemological project. As already said, to explain a law is to figure out a higher-level law and a more comprehensive deductive system under which it can be incorporated. It is a movement of breaking through the ceiling of a limiting container in order to go from the inside to the outside. And in a symmetrical fashion, the formal chain of logical deductions moves from the higher-level premises to the lower-level conclusions, from the outside to the inside.
I used the term superposition loosely, thinking of quantum superposition, which depends on incompatible observables, and using it as a mental hook to help us imagine what happens when the two incompatible concepts of transcendence interact with each other. Their mutual impact cannot be additive, as we would expect if these two qualities existed in the same conceptual domain. We should expect a transformative effect that creates something radically new, and this is what we have in the structure of hierarchical thinking, which manifests our innate understanding of levels, predicated, as stated above, on disjunction and ordering.
As far as I know, Heidegger never theorized hierarchies. But I start with Heidegger’s gloss on transcendence because of his helpful discriminations and elucidation of conflicting exigencies within human thinking that help me establish a clearer focus on the nature of hierarchies. His analysis of transcendence belongs to the period of writing when he was working on creating a theory of subjectivity, which is not the object of my investigation here. Instead of looking at how transcendence constitutes subjectivity, I am interested in the anthropological underpinnings of hierarchical structures because I believe that both hierarchical thinking and transcendence came into existence on the scene of origin concomitantly with symbolic thinking. We can, in fact, recognize how both senses of transcendence are inherent in GA’s pragmatic paradox, which could be visually represented as an impossibility for a participant on the scene to keep both the desired object and the competitor in his sights at the same time, and psychologically, as two incompatible impulses: being able to extend an unopposed acquisitive gesture toward the central object (the theological sense) and knowing what the competitor is intending (the epistemological sense of transcendence).
We can go further and ask how these two conflicting desires are converted into a phenomenon as complex as hierarchical thinking. My suggestion is that hierarchy is born on the originary scene when the conversion of the gesture of appropriation into one of designation creates the space of deferral. But before I derive hierarchical reasoning from deferral, I will look more closely at how we process hierarchies, through disjunction and ordering, as groupings that are dependent on other groupings.
To return to the definition of hierarchical processing, “A central feature of purposive behavior is parcellation of the main goal (e.g., preparing breakfast) into smaller subgoals (preparing coffee and buttering toast)” (Farooqui, “Hierarchical Organization…” 17373). Even though the physical tasks themselves are executed “forward,” temporally speaking, they involve backward reasoning. One example of how this works is given by Gaston Bachelard in his Dialectic of Duration. Bachelard is asking what it means to consolidate a process, to see the whole thing as a grouping, meaning a coherent, intelligible entity, and is subsequently showing that hierarchical ordering is the answer: it is what makes a consolidation of experience possible. To demonstrate this, he gives an example of how we execute a planned action, but also says that the same logic will apply to speech or thought, in other words, to a “passive” act of understanding or information processing. What Bachelard calls the “schema of initiating acts” is held together by some kind of ordering, which groups parcellated action into a cohesive continuity from its “successive summit” (77) (a “successive summit” would be a higher-level generalization in relation to lower-level parts, such as “breakfast” in relation to “making coffee and preparing toast”). Bachelard quotes philosopher Eugene Dupréel, who describes the thinking that goes into constructing a shape, let us say a wooden crate:
… [W]henever something is made there are two very clear successive states: first of all, the parts of the object to be constructed are assembled and placed in the order in which they should remain. At this point though, this order is only maintained by external and provisional means. Only in a second, definitive state, will the parts themselves, through an internal adjustment, keep the position in relation to each other that is in the finished object. When for instance a crate is to be made, for a few moments it is the maker’s hands which hold the pieces of wood against each other that he is going to nail. Once these have been hammered in, the crate holds together all by itself; it has gone from the first to the second of the two states to whose succession we have just referred. This is clearer still in the moulding process; the duality of time in this process is marked by the duality of the mould and the object that is moulded. Before the cement is poured in, the object’s parts are already placed in the correct order, but the force manifesting this order is external to them. (82-83)
Bachelard adds: “we thus pass from an ephemeral order to one that lasts, from an entirely external and contingent order to the one that is internal and necessary” (83). And later: the “order has been brought from the outside, going from the whole to the part” (84). What Bachelard describes as the external or ephemeral order, I interpret as the higher hierarchical level, which in my example, is the level that holds the overall conception of a crate in the craftsman’s mind as he performs the successive steps of holding the planks at right angles and nailing them together. In order to create a crate that does not exist yet, the craftsman knows that he has to create a box-like shape without one wall, and then add a wall, and in order to create a box without a wall, he needs to have a box without two walls, and so on… Even though the actual construction proceeds forward, the planning and mental ordering of the action goes back-to-front, informed by the higher level order that is external to the construction process. Keeping in mind an imaginary completed box, we engage in the reverse logic of moving back from the final goal, suggested by the image of the hands holding a still non-existent shape, to each intermediary subgoal until we reach the initial state.
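The backward ordering that Bachelard describes can be sketched as a small program: we derive each earlier state from the imagined finished crate, and only a final reversal yields the forward order of construction. The crate states and the one-wall-at-a-time decomposition below are invented for illustration; nothing in the code comes from Bachelard’s text.

```python
# "Thinking from the end": derive each earlier state from the imagined
# finished object, then reverse into the forward order of construction.
# The state names and decomposition rule are illustrative assumptions.

def backward_plan(goal, previous_state):
    states = [goal]                  # begin from the imagined completed object
    prev = previous_state(goal)
    while prev is not None:          # walk backward toward the initial state
        states.append(prev)
        prev = previous_state(prev)
    return list(reversed(states))    # only now do we obtain the forward plan

def one_wall_less(state):
    # a crate with n walls presupposes a crate with n - 1 walls
    n = int(state.split()[2])
    return None if n == 0 else f"crate with {n - 1} walls"

plan = backward_plan("crate with 4 walls", one_wall_less)
print(plan[0])    # crate with 0 walls  (where construction actually begins)
print(plan[-1])   # crate with 4 walls  (the imagined goal we started from)
```

The planning loop runs strictly backward from the goal, yet the list it returns is the forward sequence of subgoals, mirroring the craftsman whose hands hold a still nonexistent shape.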
Ralph Holloway in his classic article, “Culture: A Human Domain,” notes a striking homology between tool-making and linguistic parsing, both of which involve a set of rules or techniques governed by a higher-level “idealized plan.” “Tool-makers’ activities . . . are concatenated upon each other . . . and dependent upon the overall plan or strategy involving the unit conceptualization of the final tool or form” (55). In the same way, speaking involves keeping the overall grammatical structure in mind at the root level. In order to understand an utterance as a well-formed grammatical sentence, we forecast that a noun phrase will be followed by a verb phrase, which will itself consist of a verb followed by a noun phrase. A sentence can only be intelligible from the perspective of its future completion. And so even though we parse a sentence forward, stringing words together one by one, we consolidate it backward, completing the ordering of all constitutive parts from a point in the future, where we “project our consciousness.” This forward-backward understanding of order also works across levels, as when a noun phrase is embedded in a noun phrase, and so on; and thus to parse it, we need to “descend” to the deepest level of embedding, while simultaneously consolidating our understanding by backtracking to the top hierarchical level.
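The forward-parsing, backward-consolidating movement just described can be sketched with a toy recursive-descent parser. The grammar (S → NP VP, VP → V NP, NP → Det N) and the sample sentence are invented for illustration; real parsers are far more elaborate, but the logic is the same: words are consumed forward, while each constituent is closed only once its embedded parts have returned.

```python
# A minimal recursive-descent parser for a toy grammar.
# Parsing moves forward through the word list, but every node of the
# tree is "consolidated" backward, after its embedded parts complete.

def parse_np(words, i):
    # NP -> Det N: consume two words, return the subtree and new position
    det, noun = words[i], words[i + 1]
    return ("NP", det, noun), i + 2

def parse_vp(words, i):
    # VP -> V NP: the VP node is built only *after* the embedded NP
    # has been fully parsed (backward consolidation)
    verb = words[i]
    np, j = parse_np(words, i + 1)
    return ("VP", verb, np), j

def parse_s(words):
    # S -> NP VP: the sentence node, projected first, is closed last
    np, i = parse_np(words, 0)
    vp, j = parse_vp(words, i)
    return ("S", np, vp)

tree = parse_s(["the", "dog", "chased", "a", "cat"])
print(tree)
# ('S', ('NP', 'the', 'dog'), ('VP', 'chased', ('NP', 'a', 'cat')))
```

The sentence node S is the first goal the parser projects and the last node it completes: the whole is intelligible only from the vantage point of its finished parts.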
Generalizing, we can say that recognizing a pattern or unit of information involves looking ahead in anticipation of its perfect completion, and is thus two-directional, in the sense of relying on a convoluted prospective-retrospective orientation of thought. Here are a couple more examples. One of these is a fractal shape called Koch’s snowflake. To create it, we must follow a set of very simple repeating rules.
Starting with a triangle: 1) divide each line segment into three segments of equal length; 2) draw an equilateral triangle on the middle segment; 3) repeat. After we repeat this sequence three times, we begin to see an emerging shape reminiscent of a snowflake. The more times we do it, the more finely detailed the snowflake becomes. The process is infinite, and at some point the segments become so small that the line looks perfectly curved. Thus in constructing Koch’s snowflake, we follow a forward-oriented, iterative set of rules of plotting successively smaller equilateral triangles. However, in order to be able to come up with these rules, to have a vision of the overall design, we should be able to conceptualize the final (albeit infinitely detailed) shape and reverse-engineer it, in order to see what the incremental building steps must be. The same is true for hierarchically-stored information, such as the task of mapping out one’s geographical location. A humorous example of this can be found in A Portrait of the Artist as a Young Man by James Joyce. While in grade school, young Stephen Dedalus is fascinated by the strangeness and variety of geographical place names:
He opened the geography to study the lesson; but he could not learn the names of places in America. Still they were all different places that had different names. They were all in different countries and the countries were in continents and the continents were in the world and the world was in the universe.
He turned to the flyleaf of the geography and read what he had written there: himself, his name and where he was.
Class of Elements
Clongowes Wood College
The Universe. (15-16)
The point here is that in order to pinpoint the location of the protagonist, we must first go to the end of the chain and map the universe, and only then locate the world within it, find the continents, and so on; otherwise, we have no way of knowing where the class of elements is.
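Returning to Koch’s snowflake, the forward, iterative side of the construction can be put in numerical form: each pass replaces every segment with four segments one third as long. The sketch below tracks only segment counts and total perimeter; the unit side length is an arbitrary assumption for illustration.

```python
# The Koch rules numerically: each iteration multiplies the number of
# segments by 4 and divides their length by 3, so the simple repeated
# rule generates unbounded detail (and unbounded perimeter).

def koch_iterations(n, side=1.0):
    segments, length = 3, side              # the starting triangle
    history = []
    for _ in range(n + 1):
        history.append((segments, segments * length))
        segments *= 4                       # each segment becomes four...
        length /= 3                         # ...each one third as long
    return history

for step, (count, perimeter) in enumerate(koch_iterations(4)):
    print(step, count, round(perimeter, 4))
```

The perimeter grows without bound even as the drawn figure settles into a fixed visible shape, which is one way of seeing why the snowflake’s completion can only be intuited “in infinity,” from the imagined end of the process.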
With these two additional examples, I hope to have demonstrated that communicating in and reproducing patterns engages hierarchical thinking. Even if we do not normally think of a process such as building a crate as hierarchical, it mobilizes a hierarchical structure because an overall pattern, like a finished crate, is taken as an idea held in mind, lifted to a higher level where it “hovers” above the level of its implementation (building a crate, we must keep the image of the crate in our imagination, on a higher level, as it were). Holding an overall pattern on a higher level entails activating the mental processes of projection and consolidation, described above. This movement retains the sense of the theological transcendence in the way the successive states (of constructing a shape, parsing a string, etc…) can be seen as a countdown toward completion. Traversing the entire path to the point of final destination brings intellectual satisfaction and represents the gesture of closing the transcendent gap to reach the unconditioned. In the case of infinite figures like Koch’s snowflake, the gap is never bridged, but we can intuit its completion in infinity.
The epistemological transcendence, as mentioned above, is expressed through the discontinuity of hierarchical structures, their stratification into separate levels. I will briefly focus on levels by discussing the logic of recursion. Recursion is not applicable to all hierarchical structures, but it is relevant to self-similar structures, which I am examining, because the latter are easily and naturally generated by a recursive algorithm. Before I show how deferral can unfold into a hierarchical structure, I will briefly discuss the logic of embedding rules. Theoretically, a hierarchy can be processed through successive iterations and additions of levels (and this is how young children approach it, starting with local information first, while older children develop a so-called global bias in that they try to detect and identify the overarching hierarchical structure). But a more efficient rule for processing hierarchical information is recursive embedding, in which an item depends on another item of the same category. For example, we are all familiar with recursion within generative grammar, such as compound nouns or embedded noun phrases or clauses, which can be generated on multiple levels with one rule. Recursion in language implies that a linguistic unit may be embedded within the same type of linguistic unit to a (theoretically) infinite depth. Some examples are: [[[student] film] committee] (compound noun) or [the man [who sold me [the car that wouldn’t start]] no longer works there] (embedded clause and noun phrase).
Just as hierarchies appear to be inherent to human cognition, so does recursive sequencing, which generates self-similar hierarchies, although cognitive scientists acknowledge that “to what extent humans actually extract recursive principles while parsing self-similar structures, remains mostly untested . . . [and], while there have been some attempts to describe recursion as a cognitive module akin to an encapsulated system in the brain, there is no empirical evidence currently either supporting or challenging this view” (Visual Recursion 24). And while there might or might not exist a “hardware” network in the brain that implements recursive rules, it is not where we should go to understand the function and meaning of recursion.
One place we can go is to look at the operation of an AI programming language to see how recursion and backward logic work in concert. Some AI languages use the so-called declarative paradigm, fully or partially. Declarative programming is opposed to the more familiar imperative programming, which is procedural and forward-oriented. Imperative languages control the flow of a program’s execution through a sequence of instructions, such as loops and conditional statements, which force the program toward desired states. In real life, people would give directions or instructions on how to do something using imperative logic: “go over there, then do this, then stop.” In the formulation of a Wikipedia article, “In much the same way that the imperative mood in natural languages expresses commands, an imperative program consists of commands for the computer to perform” (“Imperative Languages”, para. 1).
The declarative programming paradigm, on the other hand, does not force or define the program’s control flow but states what is to be accomplished. We can see it with Prolog, an AI language, which is used for “solving problems that can be expressed in the form of objects and their relationships” (PP 1). It is a logic language that is especially well suited to programs that manipulate symbols and abstract concepts, and not necessarily numerical variables or mathematical calculations (although it could be used for these purposes too). Prolog consists mainly of facts, rules, and questions. The declarative part has to do with facts about objects and relationships within the programming universe, which are declared at the beginning of the program. Rules, correspondingly, define and regulate relationships between objects. A program is run by formulating a query and testing whether it can be proven vis-à-vis the known rules and facts. It returns the answer “yes” or “no.” A simple example would be a program consisting of two facts and one rule.
parents(alice, victoria, albert).
parents(edward, victoria, albert).
siblings(Sibling1, Sibling2) :- parents(Sibling1, Mother, Father), parents(Sibling2, Mother, Father).
Our two facts, in an arbitrarily chosen form, are about a relationship we call parents. This relationship takes three arguments: the name of the child, followed by the name of the mother, then the father. The facts that we have declared use the constants edward and alice for the names of the children and victoria and albert for the parents. The last statement is a rule (indicated by the inference operator “:-”) that defines the relationship called siblings, which takes two arguments. The relationship holds if each argument can be matched to a parents fact with the same values for the variables Mother and Father. Or to put it in natural sentences: “It’s a specific fact in the program’s world that Victoria and Albert are parents to Edward and Alice. In general, being a sibling means that you have the same parents.”
We could then formulate the following query:
?- siblings(alice, edward).
This means, intuitively enough: do the values alice and edward exist in the relationship of siblings, or, are Alice and Edward siblings? To which Prolog will return the answer: “yes”.
Suppose our query was:
?- siblings(alice, alfred).
To the latter, Prolog will return a “no”, which simply means that alfred and alice are not siblings in the world of the program because we have never declared it as a fact.
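How Prolog’s goal-by-goal search answers such a query can be simulated in Python. The sketch below is purely illustrative (real Prolog unification is far more general): the two nested loops mirror the two goals of the siblings rule, and moving on to the next stored fact after a failed match is the backtracking step.

```python
# A toy simulation of Prolog's search for siblings(X, Y):
# try every way of matching the two parents goals against the
# stored facts, moving to the next candidate whenever a match fails.

PARENTS = [
    ("alice", "victoria", "albert"),
    ("edward", "victoria", "albert"),
]

def siblings(x, y):
    # siblings(X, Y) :- parents(X, Mother, Father), parents(Y, Mother, Father).
    for child1, mother, father in PARENTS:      # first goal
        if child1 != x:
            continue                            # no match: backtrack to next fact
        for child2, m, f in PARENTS:            # second goal
            if child2 == y and m == mother and f == father:
                return True                     # both goals satisfied: "yes"
            # otherwise fall through: backtrack to the next parents fact
    return False                                # all choices exhausted: "no"

print(siblings("alice", "edward"))  # True  (Prolog answers "yes")
print(siblings("alice", "alfred"))  # False (Prolog answers "no")
```

Note that, exactly like the Prolog rule it mirrors, this simulation would also count alice as her own sibling, since nothing requires the two arguments to differ.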
Prolog is relevant to my argument because of the way it executes searches for answers using backtracking and recursion. Recursion means that the logical sequence of the execution runs backwards, with the program calling itself until the final point, which is the first fact, is reached. The first fact serves as a stopping condition and the top of the hierarchy. Backtracking, in turn, means that the program starts by trying to solve each goal in a query, but when it comes to a goal that cannot be matched, it backtracks to the most recent spot where a choice of matching a particular fact or rule was made. Here is an example of recursion and backtracking in a Prolog program that calculates factorials. The factorial of a positive integer is the product of all positive integers up to and including that number. Normally, we do not think of factorials as recursive operations: we “count forward” when we do our calculations, as it were. But the Prolog program below shows how the thinking that underlies the understanding and calculation of factorials involves reversed reasoning and how recursion is implicit in the way we calculate this function.
factorial(0, 1).
factorial(N, F) :- N > 0, N1 is N - 1, factorial(N1, F1), F is N * F1.
?- factorial(3, 6).
?- factorial(4, 24).
In the given formalism, the relationship called factorial takes two arguments; the first of these is the number whose factorial we want to know, and the second the value of the factorial. The relationship is defined by one fact and one rule, such that the factorial of 0 is conventionally designated as 1, and the factorial of any integer greater than 0 is the product of this integer and the factorial of the previous integer. The reason this function is recursive is that it refers to itself, embedding a call to itself in the process of calculating intermediate values. The recursion is executed by launching the logic of a countdown, with the factorial of 0 acting as the stopping point. When we ask, for example, what the factorial of 3 is, the program cannot answer immediately; it must defer until it can calculate the factorial of 2, then 1, then 0, at which point it can go no further, 0 being the stopping point. Then it stops and goes back. Going back is called backtracking (strictly speaking, Prolog backtracks to earlier choice points when a goal fails, but the retracing movement is the same). As the program backtracks, it performs the calculations for the operations that were previously deferred: calculating the factorial of 1 now that it knows the factorial of 0, then calculating the factorial of 2 now that it knows the factorial of 1, and then finally calculating the factorial of 3 now that it knows the factorial of 2. We can see how the execution is first pushed inward, to the deepest hierarchical level, and then unravels back to the surface level of the query. With each returning step, the sequence of calculations is induced to be performed automatically, as if pulled by a string or activated by a chain reaction of falling dominoes. What is especially pertinent to our previous discussion is the demonstration of how the retrospective movement of consolidation is implemented through the logic of backtracking.
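The descent-and-return movement just described can be mirrored in Python, recording each deferred multiplication on the way down and performing it on the way back up. This is an illustrative sketch only; Prolog’s actual machinery of unification and choice points is not modeled.

```python
# Deferral and consolidation in the recursive factorial: the descent
# records each multiplication it must postpone, and the return path
# performs them in reverse once the stopping point (0) is reached.

def factorial_trace(n, trace=None):
    if trace is None:
        trace = []
    if n == 0:
        trace.append("reached stopping point: factorial(0) = 1")
        return 1, trace
    trace.append(f"defer: factorial({n}) waits on factorial({n - 1})")
    result, _ = factorial_trace(n - 1, trace)      # descend one level deeper
    value = n * result                             # the postponed multiplication
    trace.append(f"consolidate: factorial({n}) = {n} * {result} = {value}")
    return value, trace

value, trace = factorial_trace(3)
for line in trace:
    print(line)
print(value)  # 6
```

The printed trace shows three deferrals pushing inward, the stopping point, and then three consolidations unwinding back to the surface, the same double movement the Prolog execution performs.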
By following the order of execution of a Prolog program, we can trace how it simulates our simultaneous prospective-retrospective orientation of thought with a sequential succession of operations running in both directions. Although “unnatural” in its mechanical reversal of direction as it alternates between forward search and backward consolidation, Prolog can nonetheless illustrate what is involved in information processing and pattern recognition.
Our ability to use recursion and backtracking is at the root of deferral and must thus have been born with the origin of language and representation. I believe that hierarchy comes into existence on the originary scene when the transformation of the gesture of appropriation into one of designation creates the space of deferral. My understanding of GA’s notion of deferral is that it includes both senses of transcendence (deferral/difference) contained in Derrida’s différance. I see it not as a future-oriented postponement but as a convoluted, self-reflexive, teleological structure that has the capacity to form hierarchical orders of unlimited complexity. To defer is not to abandon, forget, or get sidetracked; it is to place oneself in an imaginary future position from which one can evaluate the past. This possibility is planted in the originary event, which is made possible precisely by the fact that sign users can anticipate what is potentially coming; they have acquired foresight about the danger of converging mimetic desires because they can imagine a future eruption of violence. The event is only a potentiality at this point: it has not yet happened; yet the participants are able to represent it to themselves as a cautionary scenario viewed from a retrospectively oriented future moment. Consequently, they re-script the story as one of peaceful resolution through the act of deferral.
On the originary scene of representation, deferral is a one-step event: after the moment of “awakening” to symbolic thinking, accompanied by the deferral of the acquisitive gesture, the participants return to the appetitive object in the shared act of feasting. But this simple delay must carry within itself the seed of the complex mental operation that we ascribe to deferral, which functions by reserving a place and promising to come back to it. It can also defer further, reserving a new place and promising to come back to that one, and so on. Deferral is in itself a discrete operation, circumscribed by its limited situational context: in the middle of deferring, a new situation might arise, a new circumstance that necessitates a new scene. I have already mentioned embedded grammatical structures in the context of hierarchies and recursion. What makes it possible to parse these structures in the first place, however, is our capacity for deferral. For example, we can mentally defer parsing a clause within a very long sentence in order to process an embedded clause within it, then defer yet further to descend to another, even deeper, level, but eventually reemerge and consolidate all the deferred parts of meaning into one overarching meaning. Our aesthetic enjoyment of Proust’s long sentences is testimony to this ability.
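This parsing movement can be sketched in code. In the toy Python example below (my own illustration; the bracket notation for clause boundaries is a made-up convenience), the parser defers the clause it is currently processing whenever it meets an embedded clause, descends to the deeper level, and then reemerges to consolidate all the levels into one hierarchical structure:

```python
def parse(sentence):
    """Parse a sentence with embedded clauses marked by '[' and ']'.

    Returns a nested list mirroring the clause hierarchy: words belong
    to the current clause, and each embedded clause becomes a sublist.
    """
    def clause(tokens, pos):
        node = []
        while pos < len(tokens):
            tok = tokens[pos]
            if tok == "[":
                # Defer the current clause and descend into the embedded one.
                sub, pos = clause(tokens, pos + 1)
                node.append(sub)  # consolidate the embedded clause
            elif tok == "]":
                return node, pos + 1  # reemerge to the enclosing clause
            else:
                node.append(tok)
                pos += 1
        return node, pos

    tree, _ = clause(sentence.split(), 0)
    return tree

parse("the man [ whom I saw [ yesterday ] at noon ] left")
# → ['the', 'man', ['whom', 'I', 'saw', ['yesterday'], 'at', 'noon'], 'left']
```

Each opening bracket reserves a place in the outer clause; each closing bracket triggers the return to it, exactly the reserve-and-resume structure of deferral.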
Deferral, in other words, is infinitely deferrable, and in order to be so, it must have a capacity for deep memory. In computing, this deferrable memory has been implemented through the model of a stack, a type of working memory in which intermediate results and pending operations are stored on top of each other and removed in last-in, first-out order. One can stack tasks on top of deferred tasks and then return and resume the old job when the current job is finished. In Prolog, we can see this when a program attempts to satisfy a goal. When a variable becomes instantiated, such as Sibling1 to alice in the above example, Prolog puts a place-marker on the stack in order to keep its place while it tries to instantiate the next variable. When the query is about alice and alfred, as in the second example, the goal of matching alfred to the given facts does not succeed, which forces the execution to go back to the saved place and un-instantiate alice before it can return with the negative answer. Something like a stack, then, must exist in the brain’s “hardware.” But how would this work conceptually? If storing information and knowing when to come back requires memory, where does memory come in on the originary scene? Does it emerge as a mechanism parallel to deferral, enabling the latter to save its place and return to it? The problem with positing a separate memory circuitry, however, is that one needs memory to remind oneself to consult one’s memory. This does present a conundrum, but the example of Prolog shows a way out. On the level of implementation, something like a stack with place-markers might have a biological analogue in some neural mechanism supporting our mental capacity for deferral; conceptually, however, no separate reminder is needed, because the memory of the branching-off locations is automatically restored through backtracking.
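The instantiation and un-instantiation just described can be sketched procedurally. In the Python fragment below, the fact base is hypothetical and the mechanism is far simpler than Prolog’s actual machinery: each loop acts as a choice point, and when the second goal cannot be satisfied, the current binding is abandoned and the search resumes at the most recent choice point’s next alternative:

```python
# A hypothetical fact base of parent(Parent, Child) pairs.
parents = [("tom", "alice"), ("tom", "edward"), ("ann", "bob")]

def siblings(x, y):
    """Succeed if x and y share a parent, in the manner of a Prolog query."""
    for p, c in parents:              # choice point for the first goal
        if c == x:                    # instantiate: this child matches x
            for p2, c2 in parents:    # choice point for the second goal
                if p2 == p and c2 == y and c2 != x:
                    return True       # both goals satisfied: "yes"
            # Second goal failed: the binding is abandoned, and execution
            # backtracks to the next alternative of the outer choice point.
    return False                      # all alternatives exhausted: "no"

siblings("alice", "edward")  # → True
siblings("alice", "alfred")  # → False, reached only after backtracking
```

The negative answer for alice and alfred is returned only after every saved place has been revisited and exhausted; the “memory” of where to resume is carried by the structure of the search itself.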
Finally, if we accept the premise that parsing a linguistic structure is functionally similar to calculating a factorial, we can see how embedding memory into recursion and backtracking converts the metonymic logic of deferral into the metaphoric logic of hierarchies.
Another reason for looking into declarative languages has to do with their very name. Is there a kinship between declarative programming and declarative sentences, the latter being of great interest to GA scholars? I think there is, at least for the languages based on queries. The goal of a query is to receive a confirmation. The question is formulated in such a way that the explicit topic of the inquiry is not a calculation but a fact. Does a given fact conform to the world of the program? If so, a “yes” answer is issued, an assurance that confirms the hypothesized state of affairs. Can we declare it as a fact that edward and alice are siblings, embedded in the hierarchical structure of facts and laws? Is it true that the factorial of 3 is 6, based on the definition of the factorial? In receiving a “yes,” we can rest assured that our quest for confirmation has been successful. The kind of anxiety that motivates this quest in the first place is very similar to what is at work in declarative sentences.
In The Origin of Language, Eric Gans points out that the declarative is not the primary grammatical form, as it is often treated, but a derivative one, having been derived from the intermediate form of the negative ostensive. The latter emerges as a “possible negative ‘reply’ to the imperative” (91) and designates the absence of the desired object. As long as the object is present at hand on the scene of representation, there is no need for grammatical predication. One participant can simply point at it and issue a command, “Hammer!”, to which the other will reply wordlessly by handing him the requested hammer. But if the object is not immediately present, the other can refer to it ostensively, by uttering its name (“hammer”) and adding some operator of negation (“no”), to produce a primitive negative ostensive form that might have sounded something like “hammer – no.” The negative ostensive represents the object’s absence on the worldly scene, where the communication between the participants is taking place. But as the negative ostensive evolves into the declarative, the worldly presence of the object is substituted with its linguistic presence through predication. The declarative gives additional information about the whereabouts of the object in the form of topic-comment: “the hammer is over there.” Whether or not the hammer really is over there, it always “is” “over there” linguistically, by virtue of the well-formed declarative sentence. Its presence in an abstract linguistic sense comes into existence with the emergence of the declarative, but this does not mean that it is present in the real world at the present moment for the participants who are waiting for it to be produced.
As Gans explains, “The information contained in the declarative acts as a bar to the anticipated fulfillment of the imperative request, and in so doing establishes a barrier between the prolonged linguistic presence within which this fulfillment was awaited and the situation at hand” (101). Deferral as a symbolic phenomenon captures the meaning of the originary scene in the pragmatic paradox of holding a frustrated desire for the object that cannot be produced. Instead of materializing the requested object, the declarative signifies the deferral of its presence and thus “the defeat of desire by reality” (92).
The trauma of the missing object is resolved in a symbolic way through the linguistic presence created by predication; but to appreciate this as a cognitively and emotionally fulfilling experience, we need to look at it narratively, because the declarative, while itself a derivation of the negative ostensive, is in turn the origin of the narrative form, which is its temporal derivation. In his “Originary Narrative” article, Gans writes that the declarative works through the narrative supplementation of the sign, with the sign “telling the story” of its own emergence as a temporal conversion of the gesture of appropriation into the gesture of designation via the hesitation of deferral. What this implies for the evolution of the declarative from the imperative is that “the sign qua declarative ‘tells the story’ of the impossibility of the imperative—of the desired object’s absence from the world of desire.” But the object’s unavailability to physical appropriation is compensated by its symbolic availability, which Gans sees as a minimal narrative performance: “This substitution of an utterance for an object is the originary act of narrative ‘supplementation.’” In narrative terms, we refer to the satisfaction we derive from such supplementation as narrative closure, a purely symbolic, intellectual pleasure that can exist only in language and that affords us the gratification of linguistic presence when we cannot have the object itself.
From its emergence on the originary scene as a new non-instinctual behavior for evading violence, deferral evolves into a complex structure that allows the trauma of the object’s absence from the world of desire to be healed by re-inscribing it into the realm of the symbolic. And if we look at the anthropological meaning of hierarchical structures, we can observe that they are mobilized for the same purpose, the same project of assuaging anxiety. Thus if we take the area of scientific explanation, we will recognize that all “why?” questions are underlain by a sense of anxiety, which cannot be dismissed as irrelevant or improper to the factual content, because it is motivated by the desire to reaffirm a shared scene without which all scientific investigations would be groundless. Implicit in the “why?” question is a request to have one’s belief in the sacrally anchored and mutually agreed-upon truth claims about the nature of reality corroborated. We can always imagine an unstated yes/no query tacked onto a “why?” question: “why didn’t the apple fall to the ground? Has the law of gravity been tampered with?” or “why are the shadows so long? Is it later than I thought? Am I not on Earth?”
What this “why?” yearns for is a reassurance that all is well because a momentarily suspect, wayward fact can, after all, be re-inscribed into the relevant hierarchical tree of knowledge. The intellectual satisfaction of being able to subsume a lower-level law under a higher hierarchical node, claimed by philosophers of science, has the very same anthropological source: a desire to reaffirm the security and firm grounding of systematized knowledge, behind which lies the desire for the interdicted object on the scene of representation. Deferral takes us on the narrative ride from anxiety to relief, from dissatisfaction to satisfaction, through the peripeteia of confusion and despair, toward the affirmative closure. Yes, we are on Earth. Yes, we are subject to gravity. Yes, Edward and Alice are siblings. Yes, the factorial of 3 is 6. Yes, we’ve got contact. Yes, we are in agreement, we are on the same page, on the same scene, gathered in an ethical stance of joint attention around the same sacred center. The answer is yes; it’s a yes. It is the reassurance of the universal “yes” that Joyce recovers as the longed-for terminus of narrative desire and imposes as a playfully artificial, formal closure on his Ulysses: “yes I said yes I will Yes” (933). These words conclude the impossibly extended, exuberant monologue of Molly Bloom, who, as the Penelope figure, symbolizes the closure of the classical home-away-home plot. In this type of plot, the gravitational pull toward the moment of homecoming, that is, toward the completion of a task and the final state of rest, serves as an apt expression of deferral’s temporal and emotional trajectory. Heightened by the sexual connotations the monologue encourages, the desire for closure is here resolved by a resounding “yes” to its fulfillment.
Bachelard, Gaston. The Dialectic of Duration. Translated by Mary McAllester Jones, London, New York: Rowman & Littlefield, 2000.
Braithwaite, Richard Bevan. Scientific Explanation: A Study of the Function of Theory, Probability, and Law in Science. Cambridge University Press, 1959.
Clocksin, W.F. and C.S. Mellish. Programming in Prolog. Berlin, Heidelberg, New York, Tokyo: Springer-Verlag, 1984.
Dias Martin, Maurício de Jesus, et al. “How children perceive fractals: Hierarchical self-similarity and cognitive development.” Cognition, vol. 133, issue 1, October 2014, pp. 10-24, https://www.sciencedirect.com/science/article/pii/S0010027714000997. Accessed 14 September 2019.
—. “Representing visual recursion does not require verbal or motor resources.” Cognitive Psychology, vol. 77, March 15, 2015, pp. 20-41, https://www.sciencedirect.com/science/article/pii/S0010028515000055. Accessed 14 September 2019.
Farooqui, Ausaf A, et al. “Hierarchical Organization of Cognition Reflected in Distributed Frontoparietal Activity.” The Journal of Neuroscience, vol. 32, issue 48, Nov 28, 2012, pp. 17373-81, https://www.ncbi.nlm.nih.gov/pubmed/23197728. Accessed 14 September 2019.
Gans, Eric. “Originary Narrative.” Anthropoetics III.2 (Fall 1997 / Winter 1998). http://anthropoetics.ucla.edu/ap0302/narrative/. Accessed 14 September 2019.
—. Originary Thinking: Elements of Generative Anthropology. Stanford, California: Stanford University Press, 1993. Print.
—. The Origin of Language. New York City: Spuyten Duyvil, 1989, 2019. Print.
Haupt, Grietjie. “Hierarchical thinking: a cognitive tool for guiding coherent decision making in design problem solving.” The International Journal for Technology and Design Education, vol. 28, issue 1, March 2018, pp. 207-237, https://link.springer.com/article/10.1007/s10798-016-9381-0. Accessed 14 September 2019.
Heidegger, Martin. The Metaphysical Foundations of Logic. Trans. Michael Heim. Bloomington: Indiana University Press, 1992.
Holloway, Ralph L. “Culture: A Human Domain.” Current Anthropology, vol. 10, no. 4, part 2, October, 1969, https://cognitivearchaeologyblog.files.wordpress.com/2015/11/culture-a-human-domain.pdf. Accessed 14 September 2019.
Joyce, James. A Portrait of the Artist as a Young Man. Wordsworth Classics, 1992.
—. Ulysses. Penguin Books, 1992.
Lowenthal, Francis, and Laurent Lefebvre, eds. Language and Recursion. Springer-Verlag New York, 2014.
Ludwigs, Marina. “The Limit of Explanation: Following the ‘Why’ to its Epistemological Terminus.” Anthropoetics X.1 (Spring/Summer 2004). http://anthropoetics.ucla.edu/ap1001/explanation/. Accessed 14 September 2019.
—. “Three Gaps of Representation / Three Meanings of Transcendence.” Anthropoetics XV.2 (Spring 2010). http://anthropoetics.ucla.edu/ap1001/explanation/. Accessed 14 September 2019.
Nickles, Thomas. “Covering Law Explanation.” Philosophy of Science, vol. 38, no. 4, December 1971, pp. 542-561.
Wikipedia Contributors. “Imperative Programming.” Wikipedia, Wikimedia Foundation, 3 Sept. 2019, en.wikipedia.org/wiki/Imperative_programming. Accessed 8 Sept. 2019.