In perusing several issues of the Digital Humanities Quarterly (DHQ) I soon discovered that the articles it contains, as opposed to the engaging presentations of DH research at the Creating Knowledge symposium described in Chronicle 483, are not addressed to readers concerned with the cultural meaning of literature and the other arts. They are of technical interest to other digital humanists, to whom they provide specialized descriptions of methods and techniques of data analysis, or advice on such activities as curating digital collections. Most of the research described in DHQ seems deliberately to avoid arousing questions of interpretation, let alone seek new perspectives on works or genres likely to be familiar to the reader. Nor is there any attempt to impose criteria of significance on the esthetic phenomena selected for study, which seem rather to be chosen deliberately for their distance from anything resembling canonical status, except when it is simply a question of archiving and cataloguing established texts.
DH in its present state appears to combine practical but conceptually narrow procedures for archiving, searching, and visualizing various kinds of data with analyses that are more exercises in the application of digital techniques than significant contributions to the study of the arts. In short, DH, before it can prove itself a necessary component of literary and artistic study, is working on establishing itself as a disciplinary community, an essential formation that Bruno Latour among others has studied and Adam Katz has theorized in the GA context. Universities are of course made up of such communities, organized in programs, departments, and faculties, and we can imagine that DH will follow the lead of Comparative Literature, which at UCLA and many other places began as an interdepartmental program, here perhaps less under the aegis of the English Department than elsewhere, and only after several decades became a department. UCLA already has an interdepartmental minor in DH; a major would be the next stage, then departmental status. The latter is not yet on the horizon, but time will tell.
A breakdown of the last four issues of DHQ into categories shows that of 18 articles (you’ve heard of Big Data; this is little data), 11 are focused on techniques, whether of archiving (5), data mining (2), or digital display (“remediation”) (4), and offer no attempt at analysis, although they are all in one way or another creating tools and materials for analysis. Two deal with “DH theory” in the self-consciously earnest and jargonesque manner one expects in “new” fields; a more in-depth analysis than this one would engage notably with James Smithies’ ambitious attempt in “Digital Humanities, Postfoundationalism, Postindustrial Culture” (DHQ 8.1, 2014) to see DH as implying a “post-foundationalist” epistemology, that is, one that brackets ontology and sees only correlations. One reports on the interesting online phenomenon of “long reading”; one describes the novelist J. M. Coetzee’s early interest in computational stylistics, although without drawing any but the vaguest relationship to his novels. Finally, three can be called substantive analyses: one of the language used in the Dutch parliament after WWII, which reveals connections between postwar and Nazi-occupation attitudes, and two of artistic works. But even these last are of small interest to humanists: one is the analysis of a novel that might just as easily be placed in the “techniques” section, since it focuses far more on its technique of XML markup than on any conclusions about the work, and the other compares the vocabulary of ten contemporary female Japanese pop singers.
Yet Franco Moretti’s analyses in Distant Reading, particularly the sophisticated network-graph analysis of character-clusters in Hamlet, show that when the application of digital methods is motivated by a prior concern with such cultural phenomena as literary form, DH work can bear directly on our understanding of literature and, by extension, other esthetic genres. In Moretti’s study of the titles of English novels, the point was to examine the inaugural and necessarily superficial interface between the novels and their public, and the analysis of the diminishing length of the titles, but also of the kinds of words used in them, is in effect a study of anticipatory literary reception: what kind of title, given the changing circumstances in which a book is offered to its potential public, is most likely to make an 18th-century reader order the book from a bookseller-printer (in which case a long title serves as a kind of prospectus) or a 19th-century one pick it out on the shelf of a lending library or bookstore (in which case a short, provocative title is preferable)? Such material would indeed enrich or complement more traditional literature programs even in the absence of special courses on digital technique.
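The kind of title-length analysis described above can be sketched in a few lines. This is a minimal illustration with invented sample titles, not Moretti’s actual dataset or method:

```python
# A hedged sketch of tracking the shrinking length of English novel
# titles over time. The titles below are illustrative examples only.
from statistics import mean

titles = {
    1740: ["Pamela; or, Virtue Rewarded, In a Series of Familiar Letters "
           "from a Beautiful Young Damsel to her Parents"],
    1847: ["Jane Eyre", "Wuthering Heights"],
}

def mean_title_length(titles_by_year):
    """Mean number of words per title, keyed by year."""
    return {year: mean(len(t.split()) for t in ts)
            for year, ts in titles_by_year.items()}

lengths = mean_title_length(titles)
print(lengths)  # the 18th-century title runs to 18 words; the 19th-century ones to 2
```

Even this toy version makes the point: the measurable fact (titles shrink) is trivial to compute; the interesting work lies in interpreting it as a change in how books addressed their public.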
The simplest way to look at DH is that it applies social-science methods to a new set of data: that extracted from the contents of and reactions to works of art. (Historical data too, but I think we can agree with UCLA that for our purposes history is a social science rather than a humanities department.) And this activity, however marginal its results, is at the very least a tribute to the anthropological centrality of these cultural phenomena. Even at its most superficial, DH cannot content itself with, as the French say, “numerizing” (numériser) arbitrary aspects of artworks. Even the most trivial digital analysis, such as a study of vocabulary or sentence length, has as its intention an improved understanding of the structures of artworks and their effect on their audience. This in turn implies that the esthetic practices thus analyzed are of (anthropological) interest “in themselves,” that they are correlated with generalizable human individual and collective behaviors.
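To make concrete what “the most trivial digital analysis” amounts to, here is a minimal sketch of vocabulary and sentence-length statistics. The sample text and function name are my own illustration, not drawn from any DH toolkit:

```python
# Word-frequency and sentence-length statistics for a short text:
# the simplest possible instance of computational stylistics.
import re
from collections import Counter

def text_stats(text: str):
    """Return the five most common words and mean sentence length in words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())
    mean_len = len(words) / len(sentences) if sentences else 0.0
    return Counter(words).most_common(5), mean_len

sample = ("To be, or not to be, that is the question. "
          "Whether 'tis nobler in the mind to suffer.")
common, mean_len = text_stats(sample)
print(common)    # most frequent words, led by ('to', 3)
print(mean_len)  # mean sentence length: 9.0 words
```

The machinery is elementary; whether its output illuminates anything depends entirely on the questions the humanist brings to it.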
To insist on this may seem to belabor the obvious. But contrast, for example, a typical piece of victimary criticism that reduces a given work to the mythical embodiment of hegemonic dominance, or in exceptional cases, a rebellion against it. The operation of the esthetic in such cases is simply taken for granted as a means for the communication of propaganda. Art, in this critical context, is a “given” medium of communication, like the declarative proposition for metaphysics, except that where the proposition is understood as transparent to, indeed, identical with its “meaning,” art is a non-propositional and therefore “irrational” means of persuasion that can be used for good as well as evil but in either case points to no transcendent, sacred, originary realm. Victimary revelation is in essence an unmasking of esthetic techniques that seek to pass off the stereotypes of “late capitalism” or its earlier equivalents as objective in order to seduce deluded readers to accept the world’s oppressive state as “natural.” From the victimary perspective, the need for criticism, in short, is to separate the artistic spectator from the spell of the esthetic in order that he/she realize the political message that is in fact being conveyed. (If the message is “correct,” then the reader is permitted to return to the esthetic hookah without remorse.)
To generalize from this admittedly small but presumably typical sample of DH activity, most of these authors seem to be more digitalists than humanists, and therefore in humanities terms are shooting rather low. They might want to gradually raise their sights to aim at artworks and genres of real significance. Above all, there should be an awareness in the DH community that even without fussing about which works should enter the “canon,” artworks are simply not of equal significance, and treating them as such blinds the researcher, digital or not, to the profoundly human criteria that make different works perform different cultural functions, and that make some works, or families of works, more culturally significant than others.
But the medium is the (postfoundationalist?) message, one might answer: numerical methods operate the same way on masterpieces and on trash, and there is not only no obvious way to use digital methods to make distinctions of quality, but the implicit ethic of DH militates against any such invidious comparisons. There is a quietly subversive side to DH that, rather than deconstructing artworks’ supposedly oppressive political subtext, simply refuses to valorize the works themselves over the graphs and tables that describe them, and ultimately views art and its digital analysis as merely two different modes of data production.
Thus DH might take us from a paranoiacally obsessive anthropological model of oppressor and victim to a refusal to countenance any anthropological model at all. Which allows a transition to a related subject: what if computers could generate the art as well as the data for its analysis?
A computer generating art creates an output that falls in the category that Andrew Bartlett has defined with the paradoxical expression impossible-human. (Mad Scientist, Impossible Human, Davies, 2014.) A computer-generated painting or poem is impossible-human in the same sense as the speech of Frankenstein’s monster: a human-like output of signs but from a source created by a human rather than organically developing as one.
This wake-up call may serve as an incentive to those who truly take pride in their art to make sure their work cannot be confused with that of a computer, or, as has been tried in the visual realm, of a monkey or a little child. And yet how can we be sure that we will remain on top of the heap? The world of chess, with its tournaments and championships, goes on, yet where is the romance when everyone knows that today and presumably forever hence, the world’s best chess players are computer programs? How can we deny that chess is thereby reduced from a kind of minor art to a feat, like competing at multiplying numbers in one’s head, or a sport like weightlifting, where it is irrelevant that an elephant or a forklift can outperform us?
We can accept defeat in a game, even the “royal game,” but in art itself? The visual arts have surely been degraded (although their prices have not) by the proliferation of machine-aided design; Warhol’s art is in essence an ironic rehumanization of the industrial, a bit more creative but of the same ilk as Duchamp’s Fontaine, where placement on the scene of representation, as with so much “conceptual” or “installation” art today, is close to being the whole point. Literature, the most conceptual of the arts, will no doubt be the last to go. Will it? Must it?
I have never paid much attention to those who hope to live “forever,” or who expect cyborgs to take over the universe. But as a subject for discussion in a group where I am a generation or more older than the rest, it does seem useful to speculate about the possible demise of the human as we know it. Andrew’s book, which ends with Blade Runner, a film made in 1982, in the pre-internet, pre-cell-phone, pre-social media era (the year after the appearance of The Origin of Language), still confidently affirms the ontological validity of the originary-scenic human as opposed to the human-created pseudo-human. Now art, like religion, is exclusive to the human scene, and art, unlike religion, demonstrates through our reaction to it the reality of the scene—is so to speak a “proof of the existence of God,” for one who truly understands the originary hypothesis. If a machine can imitate not just the junk art of humans who take their origin for granted or think they can/should deny it, but real art whose creator seeks the immortality of signs that demonstrate their worthiness to remain, then our unique ontological status will no longer be assured.
Let me insist on the positive side of this challenge: machines can easily enough imitate art that pastiches the products of machines, even the mind-machine spewing forth “automatic writing.” But what if we rise to the challenge and produce genuine art unafraid to seek greatness, and still fail to outdo the cyborgs? This fear is another consequence of the phenomenon to which I referred in Chronicle 484, that of the digitization of human labor. When symbol-manipulation becomes the most significant source of human productivity, then it is by no means clear that the moral model of reciprocal symbolic exchange can be maintained. Given that the symbol manipulations of mathematical models abstract from the originary scenic sacred of humanity, turning sacred-profane into 1-0, and given that some humans are more proficient with these abstract symbols than others, as well as in dealing with the machines we have created to manipulate these symbols, thereby trivializing the common linguistic space of human cultural communication (can anyone say this is not occurring?), how can we be assured that these machines cannot be taught to simulate the products in the sign-world of our already degraded experience of sacred transcendence? The material of art, like that of natural science, is wholly in the realm of signs. That the signs that grant us a hint of the sacred have no similar effect on a machine cannot in principle prevent that machine from learning new ways to create that effect. All of human history makes us want to claim that the transcendent space between sacred and profane is unbreachable, but whatever we conceive to be the difference between God and Humanity, there is no obvious ontological difference between a poem written by a human poet and one created by a computer program.
Gloom and doom is one way of describing such speculation, but it also defines a clarifying moment, perhaps at the end of the era of the human… or perhaps not. In any event, even given the inexorability of Moore’s Law, this is a problem that will not yet be faced by my generation. You who are younger will surely have to deal with it in some form (as they say about aging, consider the alternative!)—if only to discover that it is illusory.
Let me leave you with a hopefully consoling idea: Just as chatbots can pass the Turing test by pretending to understand speech, so similar mechanisms can “pretend” to compose poetry and stories. But real art, like real thought, involves genuine cognition, not just its easily reproducible side-effects: manipulation of ideas, in other words, not just of templates that superficially resemble ideas. Thus the impossible-human output must not merely give the outward appearance of thought, but actually embody its intricacy and texture. That would not mean the computer was “thinking,” but that it would be able to elaborate ideas and answer questions about them.
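The distinction between manipulating ideas and manipulating “templates that superficially resemble ideas” can be made tangible with a toy example of my own devising: an ELIZA-style generator that slots vocabulary into fixed patterns without any grasp of what the words mean. The templates and word list are invented for illustration:

```python
# A template-filling "pseudo-aphorism" generator: the output has the
# outward shape of a thought, but no idea is manipulated, only its shell.
import random

TEMPLATES = [
    "The {noun} of {noun2} is but a {noun3} deferred.",
    "What is {noun} if not the {noun3} of {noun2}?",
]
NOUNS = ["sign", "scene", "desire", "sacred", "origin"]

def pseudo_aphorism(rng: random.Random) -> str:
    """Fill a random template with three distinct randomly chosen nouns."""
    noun, noun2, noun3 = rng.sample(NOUNS, 3)
    return rng.choice(TEMPLATES).format(noun=noun, noun2=noun2, noun3=noun3)

rng = random.Random(0)  # seeded for reproducibility
line = pseudo_aphorism(rng)
print(line)
```

Such a program can emit sentences indefinitely, and some will sound profound; but it could never elaborate on one of them or answer a question about it, which is precisely the gap the paragraph above insists on.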
Thus the art we would have it demonstrate would not be the kind of prose/poetry common today, much of which indeed reads as though a computer generated it, but poetry, or prose, that embodies complex thought. Let’s see a computer write poems and stories about… the originary hypothesis! Indeed, this line of thinking suggests that we should analyze more concretely how our esthetic criteria are related to the revelatory model of which the hypothetical originary event provides the minimal model. At least in this manner we will be able to tell whether we are indeed going to keep ahead of our software, or whether the impossible-human is about to become a possibility.
To conclude this series, I hope to put together some speculations on what strikes me as the soft underbelly of natural scientism: the current state of the “digital” understanding that science provides of the universe. My impression is that as physics progresses, its ultimate quest to provide a “theory of everything” gets farther away, and that the effort to explain currently known complexities only leads to the discovery of further complexities. Yet one still hears expressions of crude Laplacian faith that if we just knew the position of all those particles… ergo, “free will” is a myth! I will see if I can flesh out these ideas for a forthcoming Chronicle.