In Terence’s Self-Tormentor the old man Chremes proclaims, “I am a human being. I consider nothing human alien to me” (homo sum, humani nil a me alienum puto) – a proclamation of magnanimity that leapt out of this 2nd-century B.C. play and took on a proud, expansive life of its own. But alongside the humanistic magnanimity runs a disturbing question – the question of this Forum. Despite all the millennia during which we have been humanizing the world, “that uneasy stare at an alien nature is still haunting us, and the problem of surmounting it is still with us”, as Northrop Frye says in The Educated Imagination (1963: 22). So, reflecting the confrontation back onto ourselves we ask, what is this “human”, beyond whose verge lies something Other?
Ask that, it seems, and tribal, territorial, oppositional metaphors almost immediately come into play: human here, non-human there, with a nervously disputed boundary in between and turf to be won or lost. The same drama enacted among nations and races plays out in the academic world as well, with the less self-confident areas of enquiry put on the defensive. It would be good to know whether, as it seems, the humanities have found themselves more often than not in that position, defining themselves in terms of what they are not. Is this the humanities’ lot by virtue of their common trajectory toward “the alternativeness of human possibility” (Bruner 1986: 53) rather than utilitarian ends? In the Renaissance, Peter Burke points out, the scholar-revolutionaries nicknamed the humanistae brought the concerns belonging to man (literae humaniores) into focus by distinguishing them from and opposing them to the concerns belonging to God (literae divinae).
Since the mid- to late 19th century, especially in North America, the sciences have taken religion’s place as the privileged force to be reckoned with, in the popular imagination the gold standard for reliable knowledge and source of public benefit.
The situation between beneficial sciences and problematic humanities continues to be exacerbated by those spatial metaphors that not only oppose ways of thinking but confine thought to a limited number of possibilities, like the Tree of Knowledge with its well-grown branches of learning or the geo-political images of turf and territory. My favourite but by no means unrepresentative example of the latter is from a lecture given by the Göttingen mathematician David Hilbert in neutral Zürich in 1917, as the empires of Europe were destroying each other all around him; apparently without irony he spoke of relations between his field and “the great empires of physics and epistemology” (1996/1918: 1107). We might name the superpowers of academia differently now, but Hilbert’s manner of characterizing them is instantly recognizable.
My concern, however, is not with epistemic warfare or diplomacy but with the deeply engrained way of understanding what one is or where one stands by opposition to an alien nature or place. Let me give substance to this concern by focusing on the specific opposition of Self to Other in computing. I want to do this not in terms of the nervousness, indeed outright fear that rippled through the humanities in the early days of computing – a fascinating historical question that I’m working on at the moment. Nor do I want to take up the question of the human as usually done in the philosophical neighbourhood of AI. The classic statements one finds there – Alan Turing’s imitation game and many subsequent positions for or against a perfect counterfeit (best argued by Hugh Kenner in The Counterfeiters) – seem to me to need considerable revision in the light of the machines we now have. For involvement with computing means not just better ways of doing certain old things and new ways of doing new things; it also hooks humanistic concerns to technological progress and so to an unstoppable force for change, continually modifying the problem to be considered. Unlike the situation Turing had in mind in 1950, our physical machines belong to us; they are intimately present, fast and (I am tempted to say) nearly resonant with us in the cybernetic sense. What’s now Other is not some hulking, room-filling, air-conditioned mainframe behind a partition, with labcoated technicians and engineers attending it, but rather an indefinitely malleable scheme vested in increasingly accessible form (despite all the frustrations), by which we humanists are modelling whatever we care about. The Other is no longer plausibly out there but tangibly in here. How many of us now feel unwell when our computers don’t work? I admit to being one such person.
It is true that early hackers knew their machines in this sense, indeed that before them the first architects of programming, such as Herman Goldstine and John von Neumann, understood that instructing the machine “is not a static process of translation, but rather the technique of providing a dynamic background to control the automatic evolution of a meaning” (1947: 2). Here, however, I am concerned with computing as a cultural phenomenon, something that was perceived to be massively out there but is now scholarship’s familiar.
I want to ask questions from the inside of that relationship, in the spirit of Warren McCulloch’s “experimental epistemology” (1960), though certainly not as a neurophysiologist. My experience, as it happens, is with works of literature – though any other artefacts of interest in the interpretative disciplines would do as well. Unlike most humanists involved with computing, my concerns nowadays are less with particular artefacts than with what tends to happen when computing becomes part of the interpretative act (which is among the most intimately human things we do). How does the question of the human look from there?
Let me take advantage of George Miller’s article “What is information measurement?” (1953), where he remarks on the contribution information theory might make to experimental psychology:
In the first blush of enthusiasm for this new toy it is easy to overstate the case. When Newton’s mechanics was flowering, the claim was made that animals are nothing but machines, similar to but more complicated than a good clock. Later, during the development of thermodynamics, it was claimed that animals are nothing but complicated heat engines. With the development of information theory we can expect to hear that animals are nothing but communication systems. If we profit from history, we can mistrust the “nothing but” in this claim. But we will also remember that anatomists learned from mechanics and physiologists profited by thermodynamics. Insofar as living organisms perform the functions of a communication system, they must obey the laws that govern all such systems. How much psychology will profit from this obedience remains for the future to show. (p. 3)
Indulge me for a moment in a bit of philology. Note the word “obedience”. In its sense closest to the human this denotes (1) an act of the will, a submission, as when I am obedient to the wishes of an equal or near-equal; then, (2) a yielding to some force or agency stronger than myself, as I would to someone with a massively persuasive argument or coercive weapon; then, (3) simply a manifestation of a force or agency so strong and in control that the very idea of resistance is nonsensical (as it is to say that a tightrope walker “defies” gravity).
In what sense is the human interpreter of a work of literature obedient, and how does the best of techno-science’s most influential invention, computing, compel his or her obedience now, or seem likely to in the future? Like the “nothing but” psychologist, he or she may be doctrinally obedient, in the first two senses, to a school of interpretation, but what about obedience in the third sense? If we suppose that the interpreter of a text uses a computer persuasively to discover statistically significant regularities that go against some readings but favour others, what then? (Such uses have been impressively successful for some time.) A close look at the relevant research shows, however, that analysis proceeds recursively, in a virtuous, hermeneutic circle in which interpreter and statistical model interact. So intimately resonant are the statistical tools and the scholarly interpreter – indeed, in some cases the interpreter has developed these tools gradually to suit the developing results – that it makes no sense to distinguish “the dancer from the dance” (Yeats, “Among school children”). It would seem, then, that the challenge for the digital humanities is to figure out how more effectively to move in that cybernetic direction, using tools as some say we always have, to leverage the evolutionary processes in our own development.
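That hermeneutic circle can be made concrete in miniature. The following sketch is entirely hypothetical – the two “texts”, the frequency comparison and the stoplist are invented for illustration, not drawn from any study cited here – but it shows the shape of the loop: a simple statistical model surfaces regularities, the interpreter judges some of them artefacts of mere function words, revises the model (here, a stoplist) and runs it again.

```python
# A minimal, hypothetical sketch of the interpreter/model loop:
# relative-frequency comparison of two invented passages, with a
# stoplist the interpreter revises between passes.
from collections import Counter

def frequencies(text, stoplist):
    """Relative frequencies of words in text, ignoring the stoplist."""
    words = [w for w in text.lower().split() if w not in stoplist]
    total = len(words)
    return {w: c / total for w, c in Counter(words).items()}

def overrepresented(text_a, text_b, stoplist, threshold=2.0):
    """Words at least `threshold` times more frequent in A than in B."""
    fa = frequencies(text_a, stoplist)
    fb = frequencies(text_b, stoplist)
    return sorted(w for w in fa if fa[w] / fb.get(w, 1e-9) >= threshold)

text_a = "the sea the sea dark wine dark sea and ships"
text_b = "a field a field green grain and carts on a road"

# First pass: the model flags "the" alongside the thematically
# interesting words, because text_b happens not to contain it.
print(overrepresented(text_a, text_b, set()))
# → ['dark', 'sea', 'ships', 'the', 'wine']

# The interpreter intervenes, judging "the" and "and" mere noise,
# and the revised model is run again – one turn of the circle.
print(overrepresented(text_a, text_b, {"the", "and"}))
# → ['dark', 'sea', 'ships', 'wine']
```

The point is not the crude statistic but the structure of the exchange: each pass of the model reshapes the interpreter’s attention, and each interpretative judgment reshapes the model.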
So is there then no troubling question of the human for the humanities? Is the talk of a “theft of humanity” scare-mongering, or only a theft so long as the disciplines fight over infinite treasures as if they were finite things? Budgets and institutional plans are painfully finite, so there’s one problem. Another is what Langdon Winner in “Technologies as Forms of Life” (1986) calls our “technological somnambulism”, our proceeding as if we were not being culturally remade from the inside. Another is that we really have no idea even how to talk about the challenge with which computing confronts the humanities because we lack the vocabulary with which to bridge critical theory to technological methods. But let me focus attention rather on the question Peter Galison raises at the end of his article on the inheritance of cybernetics, “Ontology of the Enemy” (1994), and again in Image and Logic (1997): as objects and techniques move across cultural boundaries and through time, how goes what he calls their “(incomplete) disencumberance of meaning”? (1997: 435f). After considering the historical origins of Wiener’s cybernetics, Galison concludes,
Cultural meaning is neither aleatory nor eternal. We are not free by fiat alone to dismiss the chain of associations that was forged over decades in the laboratory, on the battlefield, in the social sciences, and in the philosophy of cybernetics. At the same time, it would clearly be erroneous to view cybernetics as a logically impelled set of beliefs…. What we do have to acknowledge is the power of a half-century in which these and other associations have been reinstantiated at every turn, in which opposition is seen to lie at the core of every human contact with the outside world. (1994: 265)
In conclusion let me ask: where, then, do we stand with respect to the computational Other, a human invention with a past as checkered and complicit as cybernetics, for so long viewed as inhumanely rigorous, provoking the fear of absolute enslavement to the cold machine, or itself put (like some of our fellow creatures) so far beyond the human pale as to provoke the thought of conscience-free slavery to support a prosperous leisure for humankind? (In 1971, in a review in the Times Literary Supplement, Sir Geoffrey Vickers noted this temptation as the greatest impediment to realizing computing’s signal contribution to epistemology.) Is this Other (which we have made), as Bruno Schulz wrote about art in 1935, something which connects us to a premoral and precognitive depth at which human values and thoughts are still “in statu nascendi”? Is this Other us? Should we be scared, or welcoming, or what?
Bruner, Jerome. 1986. “Possible Castles”. In Actual Minds, Possible Worlds. 44-54. Cambridge MA: Harvard University Press.
Burke, Peter. 2000. A Social History of Knowledge from Gutenberg to Diderot. London: Polity.
Frye, Northrop. 1963. The Educated Imagination. The Massey Lectures, Second Series. Toronto: Canadian Broadcasting Corporation.
Galison, Peter. 1994. “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision”. Critical Inquiry 21.1: 228-66.
—. 1997. Image and Logic: A Material Culture of Microphysics. Chicago: University of Chicago Press.
Goldstine, Herman H. and von Neumann, John. 1947. Planning and Coding of Problems for an Electronic Computing Instrument. Part II, Volume I of Report on the Mathematical and Logical aspects of an Electronic Computing Instrument. Princeton NJ: Institute for Advanced Study. library.ias.edu/hs/digiarchives.php (26/4/09).
Hilbert, David. 1996/1918. “Axiomatic Thought”. In From Kant to Hilbert: A Sourcebook in the Foundations of Mathematics, vol. 2, ed. William Ewald. Oxford Science Publications. Oxford: Clarendon Press.
Kenner, Hugh. 2005/1968. The Counterfeiters: An Historical Comedy. Normal IL: Dalkey Archive Press.
McCulloch, Warren S. 1960. “What is a Number, that a Man May Know It, and a Man, that He May Know a Number?” Alfred Korzybski Memorial Lecture. General Semantics Bulletin 26 & 27: 7-18. www.generalsemantics.org/misc/akml/akmls/26-27-mcculloch.pdf (26/4/09).
Miller, George A. 1953. “What is information measurement?” American Psychologist 8: 3-11.
Schulz, Bruno. 1998/1935. “An Essay for S. I. Witkiewicz”. In The Collected Works of Bruno Schulz. Ed. Jerzy Ficowski. 367-70. London: Picador.
Turing, A. M. 1950. “Computing Machinery and Intelligence”. Mind N.S. 59.236: 433-60.
Vickers, Geoffrey. 1971. “Keepers of rules versus players of roles”. Rev. of The Impact of Computers on Organizations, by Thomas L. Whistler; The Computerized Society, by James Martin and Adrian R. D. Norman. Times Literary Supplement 21.5.71: 585.
Winner, Langdon. 1986. “Technologies as Forms of Life”. In The Whale and the Reactor. Chicago: University of Chicago Press.