Concerning the question of whether computers can think, the intellectual community seems divided between those who love computers and say they can, and those who hate computers and say they can’t. I wonder how many among those who deny thought to Turing machines share my passion for programming–“hacking,” as it used to be called, but like the dear old Bronx, benign terms sometimes take on sinister connotations.
The joy of computer programming, although surely not the greatest of joys, is unlike any other. It is not the joy of making the computer think, but quite the opposite: that of reducing what for me requires thought to a mindless series of mechanical operations. Every computer procedure, even one of simple arithmetic–assembly-language programmers know that arithmetic is not what comes easiest to microprocessors–is essentially a simulation. By reducing one’s thought process to a series of programmable steps, one avoids having to think in the future.
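The point about arithmetic is easy to see concretely: the 8080 and Z80 processors that ran CP/M had no multiply instruction, so even multiplication had to be simulated by a loop of shifts and adds. Here is that classic routine sketched in Python rather than assembly (an illustration only, not code from any program mentioned here):

    def multiply(a: int, b: int) -> int:
        """Multiply two non-negative integers using only shifts and adds,
        as an assembly-language programmer must on a chip without MUL."""
        product = 0
        while b:
            if b & 1:            # low bit of the multiplier is set:
                product += a     # add in the shifted multiplicand
            a <<= 1              # multiplicand doubles each pass
            b >>= 1              # multiplier halves each pass
        return product

    print(multiply(7, 12))       # 84

Once such a routine is written, it never needs to be thought about again; the thinking has been done once and mechanized.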
I recently made up my mind, after several years as a disaffected DOS (and former CP/M) programmer, to master the art of Windows programming. A course in Java I took a couple of years ago produced a couple of “applets,” but Java was not the general-purpose language I needed. So I got myself a copy of Visual Basic, sent away for Microsoft’s inevitable “supplementary” manuals, and began acquiring an amateur’s knowledge of the language. Basic is my native programming language, and VB still has most of the old M(icrosoft)Basic constructs. (How many recall that Bill Gates & Co. got their start writing Basic interpreters? My old CP/M MBasic manual may be a collector’s item.) For years, the event-driven style of Windows programming had seemed impossible for an old proceduralist to learn. But once the system was up and running and I had converted a couple of my old QBasic programs to VB, I discovered that things were a lot less different than I had imagined. Code was still code, and working with a preestablished GUI (graphical user interface) was a lot easier than trying to roll one’s own. However proud I had been of my primitive assembly-language windowing system in DOS, it was surely a better idea to make use of the infinitely more developed one provided by the Windows API.
To get my feet wet in VB, I decided to write an improved version of the Minesweeper game familiar to all Windows users. MS is the only computer game I can think of that is neither a video game, nor a card game, nor a two-person game in which the computer simulates another player. MS is a true solitaire, but one that needs the computer’s interactivity (and capacity for random number generation) to make it enjoyable. It is played for speed, but doesn’t require the kind of pre-frontal coordination little kids use in blasting invading Klingons. Choosing which squares to clear is largely a mechanical procedure, but a certain amount of optimizing and strategic thinking is possible. I first implemented the basic program, then added a few extras, like an autoclear feature that relieves the player of the tedium of clearing around the mines he has located. But what tempted me above all was a strategy module. Every MS player learns a few basic configurations in which one knows exactly where the mines are and are not. If I could formalize the basic MS strategy, I would discover new and more complex configurations.
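The two configurations every player learns first can be stated as rules. What follows is a minimal sketch in Python, not the Visual Basic of the actual program; the names clues, flags, and covered (for the revealed numbers, the squares marked as mines, and the untouched squares) are illustrative only:

    def neighbors(cell):
        """The eight squares around a (row, col) cell."""
        r, c = cell
        return {(r + dr, c + dc)
                for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)}

    def basic_inferences(clues, flags, covered):
        """Certain mines and certain safe squares derivable from single clues."""
        mines, safe = set(), set()
        for cell, number in clues.items():
            hidden = neighbors(cell) & covered
            flagged = len(neighbors(cell) & flags)
            if not hidden:
                continue
            if number - flagged == len(hidden):
                mines |= hidden   # remaining mines exactly fill the hidden squares
            elif number == flagged:
                safe |= hidden    # the clue is already satisfied; the rest are safe
        return mines, safe

    # A "1" whose mine is already flagged: its other covered neighbors are safe.
    print(basic_inferences({(0, 0): 1}, {(0, 1)}, {(1, 0), (1, 1)}))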
After conceptualizing, programming, and (the longest stage) debugging, I created a functional strategy module; given a Minesweeper position, it will find all the safe moves. (I’m still working on an “endgame” procedure to evaluate positions where the choice of moves depends on how many mines are left on the board.) This accomplishment, respectable for an amateur, is nowhere near the Deep Blue level, nor anything like what’s normally called “artificial intelligence” (AI). Nevertheless, it gives me a little supplementary insight into the question of whether computers can “think” or, to put it more precisely, whether human thinking is qualitatively different from the Turing-machine operations of computers.
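The principle behind such a module is simple to state, though expensive to compute: a covered square is provably safe exactly when no placement of mines consistent with every revealed number puts a mine on it. Continuing the sketch above (and reusing its neighbors helper), a brute-force version enumerates every consistent assignment over the “frontier” of covered squares adjacent to clues; this is only an illustration of the principle, not the author’s module:

    from itertools import combinations

    def all_safe_moves(clues, flags, covered):
        """Frontier squares that carry a mine in no consistent assignment.
        Assumes at least one consistent assignment exists."""
        frontier = sorted({sq for cell in clues
                           for sq in neighbors(cell) & covered})
        consistent = []
        for k in range(len(frontier) + 1):
            for combo in combinations(frontier, k):
                mines = flags | set(combo)
                # Keep the assignment only if every clue sees exactly
                # its own number of mines.
                if all(len(neighbors(cell) & mines) == number
                       for cell, number in clues.items()):
                    consistent.append(set(combo))
        return {sq for sq in frontier
                if all(sq not in m for m in consistent)}

Note that this sketch deliberately ignores how many mines remain on the board, which is precisely the “endgame” refinement described above; a practical implementation would also prune the exponential enumeration rather than running it blindly.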
One poses the question badly when one seeks to define thinking as either a mechanical operation or a “spiritual” one. At least the latter choice shows some respect for the specificity of the human, but it is inevitably expressed in a way that makes its partisans into mirrors of their materialist adversaries. The supernatural is only an unverifiable variety of the natural; to found human difference on it is in effect to deny its empirical reality.
The mechanist school of thought relies on complexity to differentiate between the thinking of a computer program and that of a human being. Of course a minesweeper strategy is just a small series of calculations, but–so the reasoning goes–the only difference between it and “real” thought is that the latter is more complex. Our brains have billions of neurons and synapses; they’re just bigger and better Turing machines.
I don’t purport to know any better than the mechanists how the brain works; I don’t think that’s the point. Nor am I tempted by the kind of category errors that lead a distinguished mathematician like Roger Penrose in Shadows of the Mind: A Search for the Missing Science of Consciousness (Oxford, 1994) to seek in quantum theory the explanation of the difference between human and computer thought. If this kind of thinking ever stopped to reflect on what it itself is doing, it would gain far greater insight into what human thinking really is than that revealed by its theories, just as even the worst human thinking tells us more in its very badness about human thought than the most sophisticated computer program does.
The crux is not logical but anthropological. It depends on whether one cares about the human or not. Have we reached the point in our history when what really matters is the creation of better Turing machines? One day, sci-fi writers speculate, we’ll build computers so smart they’ll take over. And we will have been right to build them even if it means the end of the human species, for these computers are truly a superior life-form. As mere “flesh-puppets” of our “selfish genes,” what higher destiny could we aspire to than to sacrifice ourselves to the creation of a superior form of beings? From a mere means by which one gene produces another gene, we become the designers and builders of our successor race…
Perhaps our descendants in the year 3000, if humanity survives that long, will have some reason to think this way. But if one thing is certain, it’s that we don’t. We have to do anthropology because the greatest danger to our survival, in the atomic era even more than on the occasion of our originary discovery of language, comes from our fellow humans. Denigrating ourselves, as we are wont to do, in comparison with animals or trees, is at best an infantile apotropaic gesture. Denigrating ourselves in comparison with computers is another. My Minesweeper program finds moves that I don’t; Deep Blue found a few that Kasparov didn’t. If this be thinking, then without a doubt, computers do it better.
The adepts of AI concede that Minesweeper and chess are restricted domains not typical of “life,” but argue that computers are merely in their infancy. The quantity of data necessary to understand, say, a newspaper article is several orders of magnitude greater than that required to play a game of chess, but the capacity to record and manipulate data has been increasing exponentially, doubling every 18 months. How can we assume that there are things that we can know or manipulations we can exercise on the data of our knowledge that are beyond the ultimate capacity of an appropriately equipped Turing machine?
The point isn’t to show that we are “smarter” than computers, that we can perform formal intellectual manipulations of which they are incapable, but simply that thinking is irreducible to formal intellectual manipulations. This does not mean that it should be explained by divine inspiration, although the rapprochement between thinking and the sacred is worth keeping in mind. Thinking is irreducible to the manipulation of formal signs because it involves the creation of formal signs. To put it in the terms of The Origin of Language, thinking has an ostensive component. Platonic metaphysics pretends to reduce all discourse to the logic of declaratives, to eliminate the ostensive and the “poets” whose work explicitly depends on it, but as I tried to show in “Plato and The Birth of Conceptual Thought” (Anthropoetics II, 2), the central metaphysical concept of the Good and, by extension, every concept, functions not as a mere marker for reality but as itself a “thing” that we possess in common, with each of us in possession of the whole. The miracle of the concept, like that of the loaves and fishes, is its embodiment of the trace of the originary scene, where the shared ostensive sign defers mimetic violence. The sign does not create the physical thing, but it creates the meaningful thing that can be represented by a sign. It is this “transcendental” function of the sign that cannot be programmed in a computer, which by definition manipulates symbols alone.
What then precisely is thinking, and why do the activities of Turing machines appear to come so uncomfortably close to it? I may appear to have surrendered to the cyberneticists in defining thinking as the attempt to reduce the entropy of a given situation by creating a model with an optimally small number of parameters that can be manipulated “mechanically”–programmed, if you will–with the final goal of transforming an anxiety-ridden decision into an algorithm. What I do and what a computer does when we try to think of a chess move are essentially similar. We both evaluate the forces on the board and their possible configurations after a certain number of possible moves with the final intention of choosing which move to make. But what differentiates chess from real life is not simply complexity. Chess is a game, that is, an activity governed entirely by man-made rules. Such activities exist only in the human sphere; they derive directly from the ritual imperative to defer violence by representation. Following strict rules is a simple way of avoiding conflict. We have all seen players who get upset and knock all the pieces off the board. These do not “break the rules” but leave the domain of rules; it is thus that “real life” contrasts with games.
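The shared procedure mentioned above, evaluating the configurations reachable after a certain number of moves, has a standard formalization: minimax search, in which a position’s value is the best result attainable assuming both sides play optimally to a fixed depth. A generic sketch follows; the callbacks moves, apply_move, and score stand in for the rules and evaluation of whatever game is being played, and nothing here pretends to be Deep Blue’s actual algorithm:

    def minimax(pos, depth, maximizing, moves, apply_move, score):
        """Value of pos looking depth plies ahead, both sides playing optimally."""
        legal = moves(pos)
        if depth == 0 or not legal:
            return score(pos)    # static evaluation of the forces on the board
        values = [minimax(apply_move(pos, m), depth - 1, not maximizing,
                          moves, apply_move, score)
                  for m in legal]
        return max(values) if maximizing else min(values)

The game supplies everything that varies; the search itself is the same mindless mechanism whether the domain is chess or tic-tac-toe, which is exactly why it can operate only inside a domain of man-made rules.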
No doubt the natural and even the human world have their regularities; thinking processes are for the most part programmable. Empirical science seeks such regularities or “laws” in its data, and computers can be and are programmed to do the same. But the reduction of chaos to order is not merely the principle of games like chess; it is the deepest principle of human interaction. The originary invention of the sign was not the discovery of a regularity in empirical data, but an “act of faith” in the efficacy of subordinating our mimetic rivalry to a Being beyond ourselves. The act of faith is a feature of all thinking, not only of that effected in the originary scene and in the scenes of revelation on which the great religions are founded. My humble Minesweeper strategy may be purely mechanical, but its inspiration is spiritual; the desire to conquer nature’s chaos through intellect reflects our more fundamental need to defer the dangers of our own. No creature that does not embody the potential violence of mimetic desire would either invent a game like Minesweeper or attempt to devise a strategy to conquer it.
At the end of a recent Chronicle I tossed off a line to the effect that only when computers began to feel resentment would I begin to worry about their approximating human thought. One member of the GAlist obligingly replied that he had just programmed a computer to simulate resentment. Computers can indeed be programmed to simulate anything. But the only computer simulation that would allow us to understand human thought would be one that could be made to generate, starting with the invention/discovery of representation, the historical evolution of human culture. Even then, of course, the computer would not be “thinking” in the sense that we are; but, just as in chess once computers began to win against grandmasters, at that point we could begin to say that human thinking had met its master and was no longer necessary.