Who am I computing?

In Terence’s Self-Tormentor the old man Chremes proclaims, “I am a human being. I consider nothing human alien to me” (homo sum, humani nil a me alienum puto) – a proclamation of magnanimity that leapt out of this 2nd-century B.C. play and took on a proud, expansive life of its own. But alongside the humanistic magnanimity runs a disturbing question – the question of this Forum. Despite all the millennia during which we have been humanizing the world, “that uneasy stare at an alien nature is still haunting us, and the problem of surmounting it is still with us”, as Northrop Frye says in The Educated Imagination (1963: 22). So, reflecting the confrontation back onto ourselves we ask, what is this “human”, beyond whose verge lies something Other?

Ask that, it seems, and tribal, territorial, oppositional metaphors almost immediately come into play: human here, non-human there, with a nervously disputed boundary in between and turf to be won or lost. The same drama enacted among nations and races plays out in the academic world as well, with the less self-confident areas of enquiry put on the defensive. It would be good to know if, as seems, the humanities have found themselves more often than not in that position, defining themselves in terms of what they are not. Is this the humanities’ lot by virtue of their common trajectory toward “the alternativeness of human possibility” (Bruner 1986: 53) rather than utilitarian ends? In the Renaissance, Peter Burke points out, the scholar-revolutionaries nicknamed the humanistae brought the concerns belonging to man (literae humaniores) into focus by distinguishing them from and opposing them to the concerns belonging to God (literae divinae).

Since the mid-to-late 19th century, especially in North America, the sciences have taken religion’s place as the privileged force to be reckoned with, in the popular imagination the gold standard for reliable knowledge and source of public benefit.

The situation between beneficial sciences and problematic humanities continues to be exacerbated by those spatial metaphors that not only oppose ways of thinking but confine thought to a limited number of possibilities, like the Tree of Knowledge with its well-grown branches of learning or the geo-political images of turf and territory. My favourite but by no means unrepresentative example of the latter is from a lecture given by the Göttingen mathematician David Hilbert in neutral Zürich in 1917, as the empires of Europe were destroying each other all around him; apparently without irony he spoke of relations between his field and “the great empires of physics and epistemology” (1996/1918: 1107). We might name the superpowers of academia differently now, but Hilbert’s manner of characterizing them is instantly recognizable.

My concern, however, is not with epistemic warfare or diplomacy but with the deeply engrained way of understanding what one is or where one stands by opposition to an alien nature or place. Let me give substance to this concern by focusing on the specific opposition of Self to Other in computing. I want to do this not in terms of the nervousness, indeed outright fear that rippled through the humanities in the early days of computing – a fascinating historical question that I’m working on at the moment. Nor do I want to take up the question of the human as usually done in the philosophical neighbourhood of AI. The classic statements one finds there – Alan Turing’s imitation game and many subsequent positions for or against a perfect counterfeit (best argued by Hugh Kenner in The Counterfeiters) – seem to me to need considerable revision in the light of the machines we now have. For involvement with computing means not just better ways of doing certain old things and new ways of doing new things; it also hooks humanistic concerns to technological progress and so to an unstoppable force for change, continually modifying the problem to be considered. Unlike the situation Turing had in mind in 1950, our physical machines belong to us; they are intimately present, fast and (I am tempted to say) nearly resonant with us in the cybernetic sense. What’s now Other is not some hulking, room-filling, air-conditioned mainframe behind a partition, with lab-coated technicians and engineers attending it, but rather an indefinitely malleable scheme vested in increasingly accessible form (despite all the frustrations), by which we humanists are modelling whatever we care about. The Other is no longer plausibly out there but tangibly in here. How many of us now feel unwell when our computers don’t work? I admit to being one such person.

It is true that early hackers knew their machines in this sense, indeed that before them the first architects of programming, such as Herman Goldstine and John von Neumann, understood that instructing the machine “is not a static process of translation, but rather the technique of providing a dynamic background to control the automatic evolution of a meaning” (1947: 2). Here, however, I am concerned with computing as a cultural phenomenon, something that was perceived to be massively out there but is now scholarship’s familiar.

I want to ask questions from the inside of that relationship, in the spirit of Warren McCulloch’s “experimental epistemology” (1960), though certainly not as a neurophysiologist. My experience, as happens, is with works of literature – though any other artefacts of interest in the interpretative disciplines would do as well. Unlike most humanists involved with computing, my concerns nowadays are less with particular artefacts than with what tends to happen when computing becomes part of the interpretative act (which is among the most intimately human things we do). How does the question of the human look from there?

Let me take advantage of George Miller’s article “What is information measurement?” (1953), where he remarks on the contribution information theory might make to experimental psychology:

In the first blush of enthusiasm for this new toy it is easy to overstate the case. When Newton’s mechanics was flowering, the claim was made that animals are nothing but machines, similar to but more complicated than a good clock. Later, during the development of thermodynamics, it was claimed that animals are nothing but complicated heat engines. With the development of information theory we can expect to hear that animals are nothing but communication systems. If we profit from history, we can mistrust the “nothing but” in this claim. But we will also remember that anatomists learned from mechanics and physiologists profited by thermodynamics. Insofar as living organisms perform the functions of a communication system, they must obey the laws that govern all such systems. How much psychology will profit from this obedience remains for the future to show. (p. 3)

Indulge me for a moment in a bit of philology. Note the word “obedience”. In its sense closest to the human this denotes (1) an act of the will, a submission, as when I am obedient to the wishes of an equal or near-equal; then, (2) a yielding to some force or agency stronger than myself, as I would to someone with a massively persuasive argument or coercive weapon; then, (3) simply a manifestation of a force or agency so strong and in control that the very idea of resistance is nonsensical (as it is to say that a tightrope walker “defies” gravity).

In what sense is the human interpreter of a work of literature obedient, and how does techno-science’s most influential invention, computing, compel his or her obedience now, or seem likely to in the future? Like the “nothing but” psychologist, he or she may be doctrinally obedient [1,2] to a school of interpretation, but what about obedience [3]? If we suppose that the interpreter of a text uses a computer persuasively to discover statistically significant regularities that go against some readings but favour others, what then? (Such uses have been impressively successful for some time.) A close look at the relevant research shows, however, that analysis proceeds recursively, in a virtuous, hermeneutic circle in which interpreter and statistical model interact. So intimately resonant are the statistical tools and the scholarly interpreter – indeed, in some cases the interpreter has developed these tools gradually to suit the developing results – that it makes no sense to distinguish “the dancer from the dance” (Yeats, “Among school children”). It would seem, then, that the challenge for the digital humanities is to figure out how more effectively to move in that cybernetic direction, using tools as some say we always have, to leverage the evolutionary processes in our own development.

So is there then no troubling question of the human for the humanities? Is the talk of a “theft of humanity” scare-mongering, or only a theft so long as the disciplines fight over infinite treasures as if they were finite things? Budgets and institutional plans are painfully finite, so there’s one problem. Another is what Langdon Winner in “Technologies as Forms of Life” (1986) calls our “technological somnambulism”, our proceeding as if we were not being culturally remade from the inside. Another is that we really have no idea even how to talk about the challenge with which computing confronts the humanities because we lack the vocabulary with which to bridge critical theory to technological methods. But let me focus attention rather on the question Peter Galison raises at the end of his article on the inheritance of cybernetics, “Ontology of the Enemy” (1994), and again in Image and Logic (1997): as objects and techniques move across cultural boundaries and through time, how goes what he calls their “(incomplete) disencumberance of meaning”? (1997: 435f). After considering the historical origins of Wiener’s cybernetics, Galison concludes,

Cultural meaning is neither aleatory nor eternal. We are not free by fiat alone to dismiss the chain of associations that was forged over decades in the laboratory, on the battlefield, in the social sciences, and in the philosophy of cybernetics. At the same time, it would clearly be erroneous to view cybernetics as a logically impelled set of beliefs…. What we do have to acknowledge is the power of a half-century in which these and other associations have been reinstantiated at every turn, in which opposition is seen to lie at the core of every human contact with the outside world. (1994: 265)

In conclusion let me ask: where, then, do we stand with respect to the computational Other, a human invention with a past as checkered and complicit as cybernetics, for so long viewed as inhumanely rigorous, provoking the fear of absolute enslavement to the cold machine, or itself put (like some of our fellow creatures) so far beyond the human pale as to provoke the thought of conscience-free slavery to support a prosperous leisure for humankind? (In 1971, in a review in the Times Literary Supplement, Sir Geoffrey Vickers noted this temptation as the greatest impediment to realizing computing’s signal contribution to epistemology.) Is this Other (which we have made), as Bruno Schulz wrote about art in 1935, something which connects us to a premoral and precognitive depth at which human values and thoughts are still “in statu nascendi”? Is this Other us? Should we be scared, or welcoming, or what?

Works cited.
Bruner, Jerome. 1986. “Possible Castles”. In Actual Minds, Possible Worlds. 44-54. Cambridge MA: Harvard University Press.

Burke, Peter. 2000. A Social History of Knowledge from Gutenberg to Diderot. London: Polity.

Frye, Northrop. 1963. The Educated Imagination. The Massey Lectures, Second Series. Toronto: Canadian Broadcasting Corporation.

Galison, Peter. 1994. “The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision”. Critical Inquiry 21.1: 228-66.

—. 1997. Image and Logic: A Material Culture of Microphysics. Chicago: University of Chicago Press.

Goldstine, Herman H. and von Neumann, John. 1947. Planning and Coding of Problems for an Electronic Computing Instrument. Part II, Volume I of Report on the Mathematical and Logical Aspects of an Electronic Computing Instrument. Princeton NJ: Institute for Advanced Study. library.ias.edu/hs/digiarchives.php (26/4/09).

Hilbert, David. 1996/1918. “Axiomatic Thought”. In From Kant to Hilbert: A Sourcebook in the Foundations of Mathematics, vol. 2, ed. William Ewald. Oxford Science Publications. Oxford: Clarendon Press.

Kenner, Hugh. 2005/1968. The Counterfeiters: An Historical Comedy. Normal IL: Dalkey Archive Press.

McCulloch, Warren S. 1960. “What is a Number, that a Man May Know It, and a Man, that He May Know a Number?” Alfred Korzybski Memorial Lecture. General Semantics Bulletin 26 & 27: 7-18. www.generalsemantics.org/misc/akml/akmls/26-27-mcculloch.pdf (26/4/09).

Miller, George A. 1953. “What is information measurement?” American Psychologist 8: 3-11.

Schulz, Bruno. 1998/1935. “An Essay for S. I. Witkiewicz”. In The Collected Works of Bruno Schulz. Ed. Jerzy Ficowski. 367-70. London: Picador.

Turing, A. M. 1950. “Computing Machinery and Intelligence”. Mind N.S. 59.236: 433-60.

Winner, Langdon. 1986. “Technologies as Forms of Life”. In The Whale and the Reactor. Chicago: University of Chicago Press.

Vickers, Geoffrey. 1971. “Keepers of rules versus players of roles”. Rev. of The Impact of Computers on Organizations, by Thomas L. Whistler; The Computerized Society, by James Martin and Adrian R. D. Norman. Times Literary Supplement 21.5.71: 585.

38 comments to Who am I computing?

  • Clare Brant

    Willard McCarty’s piece is wonderfully alert to how metaphors shape understandings of humanities and what they – we? – do. Turf, territory, even trees of learning: these old metaphors are spatial and organic, and maybe we turn to nature because deep down it reassures us (compare ‘green shoots of recovery’). I’m struck by one of Willard’s metaphors, describing computing as something that used to be perceived as out there ‘but is now scholarship’s familiar’: he hints at witchcraft, a felicitous twist to the cliché of computer wizardry. Renaissance iconography has many saints who share humane technologies with beasts, like St Jerome in his study with a lion. These days our Other seems more mechanical than animal, but a familiar suggests a helper function, a complicity that idealises what computers can do for us. If once we could imagine lions sympathising with saints, crossing the categorical chasm between beastly and pure, why do we find it hard to cross between humanity and impersonality? Does Willard feel well when his computer is not sick?

    I think Willard is right though when he says we don’t have much of a vocabulary with which to bridge – another metaphor! – the distance between critical theory and technological methods. Is a binary of Self and Other helpful? Isn’t the Other also not a computer but a person using a computer? I think wariness ought to be directed more at humans, who are the agents of the nefarious ends that computers serve. Terence’s Chremes makes his declaration of fellow-feeling to a neighbour whom he has been observing, an old man whom Chremes clocks as disturbed because he’s manically digging. Chremes asks questions, ascertains his neighbour has a problem (he’s been unduly severe with his son) and in this scene generally acts like a therapist. Some think Chremes’ aphorism can be translated as ‘everybody’s business is my business.’ That’s less benign but just as human? As Joseph Heller put it, ‘the enemy is anybody who’s going to get you killed, no matter which side he’s on.’ I welcome Willard’s interrogation of the terms by which we try and know what we are; I’d also suggest we’re not all on the same side and the Other may be scariest when it is Other people.

  • Thank you, Willard, for your wide-ranging comment on our relationship with our constant companion, the desktop (or even more intimately, laptop) computer. Let me pick up what I take to be the core question you raise:

    What tends to happen when computing becomes part of the interpretive act?

    You go on to describe us humans as “obedient” to works of literature. The word is nicely chosen, for, if you are willful, as I am and other reader-response critics are, and indeed all readers are because we are humans with free will, we can make of texts what we choose, obedience be damned. Mostly, of course, we choose to conform to shared, conventional ways of reading. We choose to be obedient, but we are not compelled to obey.

    Does the computer change this? Not a whit, say I. No more than using a magnifying glass to read 8-point type.

    The computer is one tool among many for reading texts. It offers us dictionaries and concordances that are on screens instead of on paper, but that is surely not a great difference. As you point out, some interpreters of texts use computers to discover statistically significant regularities. That is a new ability computers have given us, but we still read them by means of what you call “a virtuous, hermeneutic circle.”

    In short, I do not see that computers affect in any fundamental way the humanity of our interpretive relationship to literary texts.

    There is, however, another issue lurking in your essay: what is our relation to these more or less intelligent machines? Many writers have engaged this question, notably Sherry Turkle, who points to the fluidity of personal identity when one is able to wander about as an avatar in, say, Second Life. I suggested in, ironically, an online essay, that our relationship to computers is a regressed human-to-human relationship, somewhat like our relationship to pets (oops! animal companions). And I am sure that there are many such possibilities to be explored. Using computers to explore literary texts, however, does not change, I think, our relationship to those texts when we interpret them.

  • Peter Garrard

    For me the most interesting questions are posed in Willard’s Ciceronian apophases. Does the computer help us towards an improved understanding of human cognition? Is it a boon or a threat to the hard-won rewards of careful scholarship? To which I would respond: ‘yes, but only insofar as it can be used to model the parallel architecture of biological information processing systems’; and ‘a boon to all but the professional indexer’.

    Having said that, I do agree with Clare that Willard’s theme of metaphor as a route to understanding the importance of humanities to humanity is rich with potential insights, but her development of ‘scholarship’s familiar’ into the realm of sorcery was perhaps a bridge too far. The desktop computer may be a household commonplace, but it is not an icon of domestic existence, perhaps because its capabilities are so diverse that it cannot easily be identified with any single dimension of home. Rather, it is a thing that could transform our lives if only we knew how to master it, but which more often we feel mastered by. So there is a need to associate it with an activity with which its relationship is more subservient.

    I also perceive a ‘tool deficit’ in the more abstract fields of scholarly and professional life. A musician is seldom pictured without his instrument, or a soldier out of uniform, and the image of a doctor is incomplete without a stethoscope draped around his neck like a priest’s stole. The computer has simply filled the vacuum left by the writer’s obsolete typewriter and unfashionable cigarette pack, and the absurdity with which ‘…dark vague eyes, and soft abstracted air’ would burden the image of a philosopher.

    Should we be scared or welcoming? Neither! Merely accepting of a convenient temporary symbiosis.

  • Thanks to Clare Brant for pointing to my familiar(ity) and sympathy with the machine; frustrating lack of words with which to think about it; and that oppositional metaphor of self and other. By “familiar” I didn’t quite mean to invoke witchcraft, rather to finger a relationship somewhere between familial intimacy, with its comforts and deepening if mostly tacit knowledge; and the spooky, defamiliarizing intimacy of something else – the human in statu nascendi, as Schulz said. I dance (or lurch) with metaphors around what is to us now a household appliance, and to some of us nothing more than that, to recover the historically well-documented nervousness of first encounters with computers when they were new and their threat to and promise for the human was a live topic. Hype from colleagues as well as salesmen has not only hidden the historical questions we need to ask, as Mike Mahoney used to say, but also desensitized us to possibilities once glimpsed. There is as much a challenge to the human from computing as there is from neuroscience or primatology, especially when we turn away from trying to pass the Turing Test to those more intimate matters. As far as language is concerned, I think we’re with Darwin as Gillian Beer describes him, trying to think against the grain of available language; I hope for his vigorous stamina. This language must, I think, come from fields where for some time people have been working with improvisational, experimental, conversational encounters. In 1984 Joseph Weizenbaum, writing for Reflections on America, 1984: An Orwellian Symposium, after cataloguing the many evils brought down to us from high tech as then was, declared that “The computer, like so many other things one might discuss, is a mirror in which certain aspects and qualities of contemporary America are reflected.” That particular mirror – no one thing but a scheme for the devising of indefinitely many things – gives us many Others. I am asking, what is their family resemblance? What human in statu nascendi do they show us?

  • Thanks for the magnifying glass. Allow me to run with this metaphor for a bit.

    Let’s say that my vision is such that I cannot read the 8-point type of footnotes, or such that I cannot see with the naked eye why it is that some “stars” (the ones that radiate a steady light) are actually planets like Earth. I’d think that the magnifying glass or the telescope would make an enormous difference to my understanding and would lead to many theoretical and meta-theoretical insights – because more is more than more. Such was computational linguist Margaret Masterman’s idea of computing, that it would serve as a “telescope for the mind” (“The intellect’s new eye”, Freeing the mind, 1962). We’re certainly not even close to it yet because, as Susan Wittig pointed out in 1978, the project to realize it ran into the question of what text is, which turned out to be far more difficult than the makers of concordances supposed. Jerry McGann has been asking that question again in a most stimulating way that runs straight into the question of the human – to paraphrase McCulloch, what is text that a human may read it, and what is a human that he or she may read a text? One goes from there, I would think, to the design of text-analytic tools Masterman would be happy for (and so would I).

    Thanks as well for pointing to the problem of language. When we use the word “computer” as the subject of an active verb it is of course shorthand for “with a computer I” do thus-and-such – but then we tend to forget the linguistic convenience and slide ever so easily toward what we really don’t want to say, that the computer is an independent agent. The situation is made much worse by the singular noun just emphasized. We know that there are an indefinite number of computings (again, thanks to Mike Mahoney for the insistence on this), and that computing is present-participial, but the singular noun monumentalizes our transient system. How do we keep the tongue from forking?

  • On machines and creativity.
    I’ve been interviewing for The Reader magazine the composer, Kenneth Hesketh. He is interested in creating musical structures – programmes or patterns – that become almost self-generating. To him they are creative machines.
    But there is a moment when, in his musical vision, the machine tries to do something more, something different, in seeking to go beyond the varied replication of its own pattern. And in Hesketh that is often a fatal moment, a tragic flaw, ending in the machine’s own destruction. It is like a re-creation of the Fall: the machine becomes human, but the human is deficient precisely in that attempt.
    I am fascinated by that transitional moment in our repeated evolution. Take, for example, Philip Sidney’s ‘Certain Sonnet 19’, beginning ‘If I could think how these my thoughts to leave.’ It is a poem of rejected love and what to do with it. As the poet tosses and turns, the lines and then the stanzas shift from what ‘I’ can do to what ‘you’ might do, both to no avail. Similarly one line says ‘If’ this, the next says ‘Or’ that, going through a range of desperate alternatives. One way or another, says the poet, get me out of this dilemma!
    If either you would change your cruel heart
    Or cruel (still) time did your beauty stain:
    If from my soul this love would once depart
    Or for my love some love I might obtain . . .
    Track the almost computerized movements. ‘Change’ in the first line there is played off against ‘still’ in the second, the two underlyingly linked by the repeated ‘cruel’. ‘From’ in the third line is similarly played off against ‘for’ in the fourth, with ‘soul’ and ‘love’ trying to find the right places for themselves. But if the poem’s law is for every ‘I’ a ‘you’, and for every ‘If’ an ‘Or’, then that fourth line should be: ‘Or for my love your love I might obtain’. With quiet devastation, it is worse. The poet has given up hope of equality, can’t expect ‘your love’ as part of the ideal machine; instead only, at best, ‘some love’. Suddenly we see why that tiny word ‘some’ is born into the little world of this poem (‘in statu nascendi’). It is magnificently creative even in deficiency.
    That is to say: I think McCarty is right. It is not that this poem is ‘nothing but’ a machine, but without its machinery it also could not be more than a machine.

  • I hope Peter will forgive me for zeroing in on his comment about the mastery of or by computing and the need “to associate it with an activity with which its relationship is more subservient” (my emphasis). Because this is a Forum on the human, his evocation of the language of servitude or even slavery (which computing inherited from writings on industrial automation and automata) is particularly fortunate, and I want to capitalize on it.

    In my opening remarks I cited Geoffrey Vickers’ fingering of the temptation to put computers in the place of slaves. This may seem rather exaggerated now but certainly was in the minds of people then, such as the distinguished engineer Gordon Scarrott, who delivered the Clifford Patterson Lecture for 1979 entitled “From computing slave to knowledgeable servant: the evolution of computers” (Proceedings of the Royal Society of London 369.1736: 1-30). In 1966 the literary critic Louis Milic, in an article tellingly entitled “The Next Step”, saw as the major impediment to actualizing the potential of computing for scholarship this relegation of the machine to servitude. Noting all the progress with concordance production etc., he observed that “These [are] good things, and scholars look forward to them, but satisfaction with such limited objectives denotes a real shortage of imagination among us. We are”, he declared, “still not thinking of the computer as anything but a myriad of clerks or assistants in one convenient console” (Computers and the Humanities, p. 4).

    Better a master than a slave? So much better, I would think, not to think in such terms at all – to respond to computing’s challenge to the human by finding, in what sort of a tool computing now is, clues to the kind of mastery a craftsman has with his or her chisel.

  • Peter Shillingsburg

    McCarty’s first question – What is human, beyond which is Other? – is followed by questions upon which any answer depends: Who is asking? Who is answering?
    The tussle over turf precedes the right to ask and answer, What is human? What does that tussle tell us about being human? Does it suggest the human question begins with defining us as opposed to them, where Other is defined in terms of culture, religion, nation, field, or walk of life?
    Before tackling the big question, can we stop or settle the bickering amongst ourselves? Would stopping only put differences in abeyance? Is settling a possible goal?
    “My concern,” McCarty writes, “is . . . with the deeply engrained way of understanding what one is or where one stands by opposition to an alien nature or place.” He uses metaphors to understand such things, and his work on modeling (see Humanities Computing) shows how important picturing is to understanding. The second part of his essay provides one model for understanding Other as The Alien (not hostilely, but with Emerson’s tone, distinguishing the me from the not me), but in fact McCarty raises internecine human bickering only to boot it aside – “My concern, however, is not with epistemic warfare or diplomacy . . . .” Perhaps that is the only way, for if we try to settle first, we may never get to the big question.
    If I followed McCarty into that second discussion, I would express agreement. But what struck me in the first part was the importance of oppositions for understanding. We are famously stuck in the black box of language, where words and their referents are understood as over against their counters. Black is not black because it is black but because it is not white or any other color. It is true but no help to say black is not spinach or witchcraft, because the notion of a relevant counter word seems fundamental to how understanding works. “What is human?” is first a language question, and words come in counter pairs and groups. Metaphors, too, work with relevant counter images. The trick is to know what is relevant. And there, point of view complicates the issue. Can all cultures, religions, nations, fields of study, or walks of life ever find a common position from which to look, rendering the question “What is human?” answerable for all?

  • Wolfgang Kaltenbrunner

    Recounting a history of how humanists constructed their identity as human beings and researchers in opposition to an invading other, in this case computers, Willard McCarty proposes to think about the latter as malleable objects, rather than monolithic entities. Yet another metaphor for computers as tools for research is that of a prism. When looking at the object of study (e.g. literary texts) through that prism, the user’s perspective is refracted by his/her very own goals, methods, and history. Rather than colonizing the user’s view on literary texts, computers provide a way of looking at oneself by looking at the text. This potentially induces a (healthy?) crisis if one is not quite sure what one actually is, or if one is not even sure that one is only one instead of many. While some for example argue that the task of the humanities/literary studies consists in critically reflecting on normative truth claims produced by various (e.g. scientific) ideologies, others have decided that the humanities should produce object-related knowledge themselves, and now are negotiating the respective epistemological conditions. Within the latter group, a typical opposition is the one between broader cultural studies-inspired approaches and the proposal to ‘return to philology’, so as to increase the empirical validity of results in a limited field of enquiry. Depending on individual purposes, the computer can appear as an amplified manifestation of one’s rational super-ego, or merely as a rather benign familiar taking over subordinate tasks in everyday research practice.

    At the same time, it is dangerous to argue that the use of computers depends exclusively on individual preferences. In reality, implementing digital tools for humanistic research means research policy putting into place large-scale, and frequently compelling, infrastructures. These technological infrastructures potentially help impose an epistemological infrastructure influencing the way humanists will think about themselves and their research goals in the future. The question then is, who should it be that puts the prism into the socket? Whose identity do we want to be refracted through it? Should the infrastructure mirror/amplify the preferences of research policy or of humanists, or should it allow for the utmost degree of personalization? And in the latter case, how could the humanities’ right of self-determination be reconciled with the expectations that the rest of society formulates towards them (and which are expressed very tangibly in funding regimes)?

  • The question of how we feel when contemplating a primate, an intelligent machine or a disembodied intelligence like HAL in ‘2001’; how we feel when confronted with the demystifying, but also perhaps dehumanizing impetus of a program which ‘tells us’ which patterns are at work in our favourite piece of art/literature/music or worse, which evolutionary algorithms are at work in our supposedly free, romantic life of affect and hope – this question, and the series of tensions and oppositions it highlights, seems crucial to Willard McCarty’s allusive, light-footed reflection on ourselves and our computational Others. I shall respond with a series of oppositions which I’ll mention in order to suggest that they don’t really exist – either they no longer do, or they never did. In that sense I worry less about a tension inherent in ‘humanities computing’ than some might.
    For one thing, I don’t think our fascination with Nature is necessarily the contemplation of an Other. It’s not just that we seek to humanize it in the sense of technology, industry, comfort and commodities (hence the relief expressed by someone arriving on an apparently deserted island and exclaiming, as Kant cites in the Critique of Judgement, “Vestigium hominis video,” I see the traces of a man) but also that we anthropomorphize it by seeing it as meaningful, possibly orderly, possibly chaotic. Whether we are ‘ecologically minded’ and seek to use Nature as a norm, as a source of value, or we are ‘Darwinian’ and we seek to understand the complexities of animal life in terms of natural laws, we view ourselves as parts of Nature, to use Spinoza’s phrase. Hence I quite appreciate Willard’s phrase that in the present day “our physical machines belong to us”.
    Hence I am puzzled by some of the remarks concerning the humanities here, since it seems to me that, from Panofsky to Eco, from Ernst Jünger to Peter Sloterdijk, from C.P. Snow to Terry Eagleton, or conversely, from Ernest Renan to G.E.R. Lloyd, the humanities have shown themselves to be very adaptable, good at shedding their own skin, and not particularly reactive. And the distinction between promoting alternate forms of human possibility (a kind of Joycean vision, at least that of Stephen Dedalus) and a strictly utilitarian vision disappears when we find cognitively inclined philosophers such as Daniel Dennett speaking of a “narrative concept of self,” or maverick scientists like Semir Zeki and V.S. Ramachandran looking at “neuroaesthetics,” not to mention the emerging trend of ‘transhumanism’. To use terminology which isn’t present in Willard’s contribution but could have been, these various endeavours invalidate the traditional distinction between the quantitative and the qualitative, in their own ways.
    As Antonio Negri has noted, there is even a hidden commonality between humanism and anti-humanism: the classical humanists sought to define ‘man’ by his intellectual power or capacity, rather than by a substance; and what was the basic message of anti-humanism in the second half of the twentieth century, other than to emphasize that ‘the human’ is really nothing other than a capax mutationum, including an artificialist capacity to integrate ever-greater forms of hybridity?

  • Musically I think of Mozart’s interplay of expectation and surprise and of Wilfrid Mellers’s piece on mathematics and music, “Tuning in to the natural law” (7th in the fine series, “Thinking by numbers”, Times Literary Supplement, August to November 1971). I think also of the story of the Fall in the hands of John Milton, where it becomes a condition of being one can never quite think beyond while simultaneously sensing “a paradise within thee, happier far”. Philip’s gem is all the better for the simplicity with which it gets to and turns on his phrase (I abbreviate), “without its machinery, not more than a machine” – which I am herewith borrowing for an upcoming lecture. I am astonished by how thoroughly and how often, by how many intelligent people, the via negativa that is computing’s greatest epistemological and ontological gift is not seen, not taken up, not used to ask better and better questions about the human. Have we forgotten trial-and-error? Learning from mistakes? Seeing absence is a dual gift, a collaboration between systematic rigour (so conveniently modelled computationally) and creative insight. It may be useful sometimes to think of this as an opposition, but Philip’s “without its machinery, not more than a machine” is more intimate, more of a harmonizing – like the friendship of Gilgamesh and Enkidu emergent from their fight, perhaps.

  • Øyvind Eide

    My starting point for understanding the relationship between humans and computers is the relationship between humans and other organisms, that is, other humans, animals, and other parts of nature. This has to be seen in light of our relationship to tools.

    The relationship between a human and nature can be seen as a part-whole relationship. All organisms have grown out of nature. Yet any organism will change nature, if possible. Nature is the sum of organisms, and is created by the sum of organisms. Mankind is changing nature quite a lot, but probably less than e.g. algae.

    Our environment overlaps with nature, but arguably includes parts that are not part of nature. Are our tools, obviously parts of our environment, also parts of nature? They are generally seen as material culture, and thus non-nature. On the other hand, tools used by animals will be parts of nature, cf. a bird’s nest or a primate’s wooden tools.

    Is a dog used for hunting a tool, a companion, a colleague, a pet, a friend, a slave or a servant? Maybe equally important: For the dog, is the human taking part in the hunt any or several of those? Maybe he is just the thing opening the cans, thus providing food.

    When we use the dog as a tool, why is it different from the computer? One obvious difference: The dog has its own ideas and wishes. When I say the dog has an agenda, I speak literally. When I say the computer has an agenda, I speak metaphorically. How important is that difference? Humans are “constructed” to interact with other organisms, human as well as non-human. Do we in our practice put the computer into that role of an other with an agenda? Is the computer a type of tool that is an extension of our body, like a pen, or is it a kind of tool seen as a colleague, a hunting dog?

    Who feels unwell when our computer doesn’t work? Who feels unwell when more and more of their reindeer are catching a disease? Who feels unwell when the neighbourhood forest is dying from pollution? This is not an argument for the computer being either kind of tool. Anyone living in a very hot or very cold environment feels unwell when their car does not start. Anyone using tools in dangerous work has to trust their tools – if the wire breaks, they die.

    If there are differences between how different people and different cultures relate to the animals they have or use or know, why should we not relate to computers in equally different ways?

  • Did I boot aside that turf, that feudal “token or symbol of possession” (OED)? I think of it more as an antiquated burden that, however descriptive of our administrative state of affairs, is intellectually something we walk away from without spending the energy to kick. The trick, you say, is to find the relevant counter-word or metaphor. Then you ask, given the many points of view, is there a common place from which to ask what is human? Good question.

    How about this: the commonality of the asking. How about the great ethnographic historian Greg Dening’s favourite metaphor of the “beach-crossing”, or paradigmatic encounter of self and what for the moment at least is other? What about the present-participial engagement in the act rather than the presumed consensual standpoint from which questions may be asked and answers received? Is there commonality in the asking of this question?

    I’d think that the style of asking varies. But here we are on the ground being so masterfully explored by Geoffrey Lloyd, for example in Cognitive Variations – and (forgive the plug) by a gathering around his work to appear in Interdisciplinary Science Reviews 35.3-4 (September 2010), http://www.isr-journal.org.

  • Again, this is at least in part the very good question Peter Galison asks in “The Ontology of the Enemy” about the birth-burden of cybernetics, which we can easily find carried further in war-gaming – so far, in fact, that the influence has for some time now gone the other way, from popular commercial gaming into the military for training purposes. It is also a question that runs through the writings on what Davis Baird has called “thing knowledge” (in a book by that title). I would think that the question is trickier than we might at first suppose, varying by the thing in question (think about the hammer especially, or about the knife). It becomes intricately tricky for computing systems. Choices are made and ideas implemented at all levels of a system from basic circuitry on up. At what point do we say that these choices result in an intentional object whose built-in intentionality can be read? And this suggests a similar line of questioning about biological systems. Both run into the phenomenon of emergence, which (I am guessing here) means examining the stochastic relationship between individual events and the emergent phenomena.

    At a much (technically speaking) higher level we encounter the worrying of humanists about how commercial software pushes research in particular directions because of what it allows to be done or done easily and what it makes rather more difficult to do. Because we humanists have little money and software development is expensive, we have to be as clever as Thales.

    At the level of systems we encounter – especially interesting for this Forum – the problem of what the human is thought to be by the designers. Read, for example, what early researchers in machine translation (MT) said about language, and you will wonder why a great deal of money was not saved by asking poets and professors of literature for advice. (Perhaps that advice would have been so discouraging that the MT project would have been still-born and so much fine research never undertaken.) For an early, gross example, see Edmund Callis Berkeley’s Giant Brains, or Machines That Think (John Wiley & Sons, 1949). Berkeley declares that “we have to define thinking by describing the kind of behaviour that we call thinking”. He cites as examples adding numbers (which a machine can do), reading a signpost and making a logical choice (which, he claims, a machine can do), learning and remembering (which he equates to storing information and referring to it, and so claims a machine can do). “A machine can handle information; it can calculate, conclude, and choose; it can perform reasonable operations with information. A machine, therefore, can think” (1949: 4-5).

    My point is not to evoke a dispute about AI, its improvements since then etc. Rather it is to highlight how, under various personal and professional pressures, the human is radically simplified so that “the machine can do it” can be proclaimed. In the rush to seem to have reached the goal, the human beyond the machine is forgotten. Systems are then designed and installed, and we live under their sway. That’s the worry, I take it.

  • Back in 1999 (a symbolic year in our techno-futurist past), I was asked to give a talk with a stipulated title: “The Internet Jitters.” In that talk, I tried to demonstrate that our relationship to the “technological other” is wildly uneven:

    One occasionally encounters a person with a penchant for antique firearms, as well as those who prefer the simplicity of herbal remedies to antibiotics, but in general, most of us regard the Gatling gun as bad technology and penicillin as good technology. Our cultural anxiety over the new technological order isn’t centered on weapons technology or advances in medicine. It isn’t about our ability to make automobiles more safe or armor piercing bullets more deadly. We debate and discuss these things, of course, but they don’t move us to invoke the ancient genres of elegy and apocalypse.

    Our apocalypticism is reserved for ATM machines, email, and cell phones. Pay-at-the-pump gasoline islands, pagers, and web addresses. Distance learning, video phones, and MP3s. Electronic books, electronic classrooms, electronic surveillance. E-business driving e-tailers in the e-economy. Our elegies are for the book and the fountain pen, for the Sunday morning paper and the on-the-street political leaflet. Typewriters and printing presses. Papyrus, parchment, and vellum. Scented love letters and libraries. The slow, deliberate loops of cursive handwriting.

    Our anxiety, in other words, is specifically about information technology. Which is another way of saying that we are worried (as worried as we are excited) about what will happen if we change the ways in which we communicate ourselves to one another.

    I did what I was implicitly asked to do, which was to assure my audience that all would be well. I suspect the organizers of this site are hoping for the same. What else, after all, could one intend with a forum “dedicated to improving our understanding of persons and the quasi-persons who surround us,” except a calm sense of acceptance? We feel we must dedicate ourselves to the understanding of persons. We are terrified of quasi-persons of any description.

    McCarty is too smart for this game. His question has the donnish manner of the senior scholar impatient with the dissemblings of the young (“Should we be scared, or welcoming, or what?”), but it is nonetheless a serious fear that he is wisely unwilling to alleviate. It is the same exchange immortalized in David Cronenberg’s film “The Fly.” “Don’t be afraid,” says one character, after confronting the horrifying spectacle of a man being turned into an insect. “No,” says another. “Be afraid. Be very afraid.”

    Shouldn’t we fear being replaced by machines? Are we not within our rights to be utterly horrified by the thought that human consciousness will increasingly seem an unexceptional phenomenon, and that humanity will lose the rights, privileges, and immunities traditionally reserved to the “crown of creation?” Surely anxiety is the proper response to the idea that our technologies might force us to become increasingly isolated from one another.

    Truth is, there is nothing to worry about. Nothing at all. It has all happened many times before. Once revolutions have had a chance to metastasize through the body politic, people invariably think it was ever thus and so. If we are to be rendered insane, we will nonetheless possess the necessary gift of this condition (lack of awareness of our own insanity).

    The scholar’s job — much like the job of the artist — is perpetually to be asking absurd questions. “Who am I computing?” is a good one. Should someone else arrive, I will respond as Wittgenstein once did: “This fellow is not insane. We are just doing philosophy.”

  • Robert Knapp

    Willard McCarty’s elegantly lucid position paper raises so many questions, deep and broad, that self-restraint in response feels masochistic. I hope it will not seem too perverse to begin by putting a little pressure on his title: “Who am I computing?” The problem in parsing this query seems obvious. Should one hear an implied comma after the first person pronoun, thus inviting examination of how the (intransitive) act of computing, of interacting with complex hard- and software, modifies the (human and/or humanistic) self? Or should one hear “computing” in a transitive sense, taking as its object what would otherwise function as a subject complement (“who”)? The first alternative yields a tame and naively literal reading, while the second aggressively, and with apparent figurative intent, violates norms of usage, not only forcing an interrogative pronoun in the nominative case to work as an object, but also implicitly converting that object (and the humanistic self to which it once referred) into a special kind of “what,” a thing to be computed, a thing to be understood as a congeries of digits.

    I have no doubt that Willard expects us to take this query in both senses, and to feel ourselves suspended in an undecidable oscillation between opposing characterizations of what it means to be “human.” But is this opposition truly binary? Consider a person riding a horse. There is an old tradition (for which Xenophon’s Cyropaedia could supply a token) in which the horse instantiates “brute” nature, in relation to which the equestrian serves as rational governor. But to the good rider, the horse-as-Other is not only “out there” to be reckoned with as often helpful and sometimes dangerous, but is also “in here” as a collaborative intelligence and will (or additional servomechanism, as it were) within a transient but more than merely figurative entity, in which “obedience” in senses 1, 2, and 3 operates interactively: an ancient equivalent of the cyborg might be the centaur. “Who am I riding?” thus implies an oscillation in some harmony with that of Willard’s original question, yet it need not invoke the metaphorics of battle, need not invite territorial struggle, need not force a choice.

    One last question. “Who” can parse Willard’s title? I would venture this answer: not any machine, not by itself. To hear and feel the oscillation between literal and figurative, to sense the boundaries of normality being reworked by and through that oscillation, also requires “persons.” Following Marjorie Grene (and in various other ways Stanley Cavell and Jacques Derrida), all of whom resist the Cartesian dualism that ultimately seems to undergird Norbert Wiener’s sense of science as an unending struggle against the demons of disorder, I want to say that “persons” are responsible agents made by their embodied and temporal participation in culture (and culture, of course, includes tools): we do not “know” ourselves except through the language that (as Cavell says) we are “fated to.” How we know ourselves, these selves made through culture, and how we acknowledge one another, are not matters (merely) of algorithmic computation, but of response–of interpretive and ethical decision–at moments when computation comes to an end.

  • Willard’s very thoughtful paper presents the computer always as the Other to which we must respond. Even if it is always changing in its status, it is an opponent to be resisted, accommodated or embraced. I’d like to contest this positioning of computers and suggest another reading of computers not as Other, but as the equipmentality into which we are thrown, the ready-to-hand of everyday practice.

    You probably noticed I’m writing into the space that Willard’s paper opens up from my own space, having just finished the Technocultures lecture on Heidegger and computing. I’m not a true believer, but I do think Heidegger’s complex legacy to the study of computing might complicate the opposition of Humanist / Other that figures so strongly in this paper.

    Of course, using Heidegger to defend, or refigure, modern technology is paradoxical. Surely computing is the very incarnation of the instrumental, ontic world picture that Heidegger warned against: modelling the world to destine the way that things are revealed. Dreyfus’s 1968 attack on AI summons all this critical force to reveal the limitations of the rationalist underpinnings of that brand of AI.

    However, the reading of technology the later Heidegger makes is quite different from that made in Being and Time, in which the human-technology relationship is more complex. What if the world that the practices and resources of science and technology generated became ready-to-hand? Heidegger himself talked about television becoming an everyday object even though only a small number of physicists understand it. What if we have never been Technological (throwing in some Latour)?

    So I would argue that computers are barely Other at all. It takes the kind of special critical attention that Willard has shown to make them Other. As Michael Heim says, computers are component, not opponent. However, this questioning of the mediation of thought and human practices is very important. It is important in computing design (as Winograd and Flores show in Understanding Computers and Cognition). It is even more important on the other side of Snow’s Two Cultures split between science and Humanities, because the practice of many areas of physics, chemistry, biology and other ‘hard’ science fields has become that of using computers. The lifeworld of science has changed more dramatically than that of the humanists, who tend to adopt technologies with critical attention. The Science Wars only revealed the blindness of many scientists to the ontological transformations they had opened up in the quest for ontic knowledge.

    I part company with Heidegger most definitively in his tendency to advocate retreat to a more predictable, pre-technological lifeworld. Even then, he leaves some tantalising scope for a ‘saving power’ closest to the danger. If there is a poetic dimension to the use and design (often the same thing) of computers, it is in this critical, ethical mode that contemporary humanities scholarship has a capacity to contribute. Thanks, Willard, for a paper that I think does exactly this.

  • Tamara Lopez

    I guess in considering the Other, it is acceptable to also consider the Self, and so I’ll respond by telling you my own experience with and as the computing Other.

    I came to computing in 1995 at the start of the present wave of activity around the internet, and so for me, computing and working with computing has always been in the ‘high-level’ languages. Timothy Steele has written, “[m]eters reflect patterns of speech that occur naturally in language”.[1] For me, learning to program began with reading the work of others,[2] in learning to separate the patterns of instructions from the expressive acts in the code, and in understanding references made to older, other works. The computer was very much an Other, but no more an Other than the work of poets I struggled with in English courses taken around the same time, whose texts also had intricate meters and rhythms, pregnant metaphors and allusions to Minds past.

    In reference to the mechanics of scansion, Steele also writes “[e]xperienced poets rarely think of these technical issues when they are writing”.[3] While I don’t believe I can claim such mastery even now, I have gone some distance toward being able to recognise when, why and how to use the languages in which I work. On my best days, I am able to reach out to and pass through the layers of text written upon the machine, to inspire to action the glittering circuitry at its core.

    So obedience to the computer has never been an experience of mine, in any of the senses you cite, though I will concede to a respect for the work of those who went before, and to the languages they developed. For reasons I don’t completely understand, this fragment of a poem by Peter Abbs started spinning in my head while I was reading your essay:

    Where, God, will you be when I am dead?
    I am your listening ears; I am your glancing eyes.
    I am your tongue through which you taste your earth…
    What will you do without your scribbling messenger?
    Will you continue blind and alone? [4]

    I use language like ‘inspires’ and ‘glittering circuitry’ to describe my relation to the machine, but I wonder how it would describe its relation to me? Ignoring the seas of words written about Asimovian ethics, machine will and Man’s domination of Nature, is it appropriate to consider how the machine is obedient to us, to ask how we enslave it, to wonder where we would be without it?

    Thanks, Willard.

    —-
    [1], [3] from ‘The Forms of Poetry’. The Brandeis Review, 12 (Summer 1992), 28–33. Available at: http://instructional1.calstatela.edu/tsteele/TSpage2/Forms.html

    [2] An approach that Eric S. Raymond has advocated, though I can’t claim I learned it from him. Instead, I grasped and groped my way and then recognised my actions in his words.

    [4] from (I believe) ‘The Nomadic God’, which appears in The Flowering of Flint. This excerpt was lifted from a review of the volume in the Economist that appeared in 2008.

  • I begin with Charles’ oppositions that don’t really exist or do no longer, and I am grateful for the mutability that starting here allows.

    In the physical world we identify opposites (day and night, positive and negative charges, weights in a balance) that through successful theorizing become stable features of the world and allow us to do many things. Most of us are driven by desire to find personal opposites, sexual partners, friends, sometimes enemies, seemingly whether we like it or not. (This, at least sometimes, is my obedience[3].) Other opposites come and go more easily. My own querying of opposition has to do with a technologically opposed machine that within the last 20 years has become “personal”, to such a degree that I can with little to no effort attend from it to other things (as Michael Polanyi says) – except when it falls prey to bugs, viruses and hardware failures, when I am then suddenly disabled from attending to my work. Hence my answer to Clare Brant’s question of a few days ago: when my computer is working, I hardly notice it at all, so busy am I attending, for example, to writing this note. I feel well but don’t attribute that feeling to the operational machine.

    I very much like Charles’ term capax mutationum for the human, by which the machine which concerns me is included. But as a modelling device, this machine inherits the peril emphasized by so many who have written on the topic: that the modeller mistakes the model of the moment for the thing modelled. It seems to me that one of the tasks of the researcher (scientist and humanist alike) is to maintain a simultaneous or at least rapidly alternating bipolar awareness of both model and modelled – a critical, comparative awareness. And so again it becomes necessary to oppose one thing to another: the human, however constituted, and the machine modelling the human.

    Ian Hacking has written in various places (including Interdisciplinary Science Reviews) about incorporations of devices into the body. In some cases, such as the pacemaker, these are forgotten, and the augmented person simply lives. The same happens with “ubiquitous computing”, mobile phones, the whole technological infrastructure on which urbanites depend. What value is there in staying aware of the infrastructural extensions of the human? When we turn from modelling of, for or toward something to simulating a process we can or dare to describe algorithmically, to see where it goes, or we simply attend to whatever the simulation produces (e.g. music from an iPod; a VR representation of an ancient theatre; an activity in Second Life), the simulating itself may vanish from attention as thoroughly as the pacemaker or the mobile network.

    Charles’ generous tent of things and happenings called the human seems a beehive of activity. I guess what I want to insist on are the moments of critical awareness when the hammer or chisel or computational model from which one is attending becomes that to which one attends. Life does seem to provide such moments without the asking, but still I’d say we have quite a job on our hands making sure that the human with the computer is at least equal to the one without it.

  • Geoffrey Rockwell

    Willard McCarty wants to ask questions about how we see the human from within a relationship with computing. I’m tempted to take him up on the optical metaphor and look at where our gaze is turned. We could put this as an interface problem – how are we presented to ourselves through the double mediation of literature digitized and rendered by a computer? This reflected gaze surely says something about the image maker, the desires of the human to see ourselves projected on the other. In particular, the extraordinary proliferation of social network visualizations seems to be a new way we can see our social selves laid out like webs. These visualizations have been made possible by the increasing use of information technologies in everyday life and by communications that can be logged, often with our acquiescence. What do we hope to learn about ourselves as social?

    “in some cases the interpreter has developed these tools gradually to suit the developing results – that it makes no sense to distinguish ‘the dancer from the dance’…”

    I want, however, to turn to the cybernetic challenge McCarty poses about the gradual development of the very tools we use to ask questions about the human through computing. How could these evolving instruments tell us about ourselves? What does the telescope tell us about the human if it is aimed at the stars? What does a text analysis program tell us about interpretation if it is aimed at a text?

    Well, one thing they tell us is that tool development itself is contested. John Unsworth in “Tool Time, or ‘Haven’t We Been Here Already’” confronts the possibility that we could be back making tools that don’t work again:

    “So: are we back where we were in 1996, looking at an old idea in a different presentation, even though it didn’t work the first time? … We still have the social issues–how to structure a successful collaboration; how to engage end-users in design; how to sustain such a project over the long run, through inevitable changes in institutional priorities, computing environments, and personnel.” (2003, http://www.iath.virginia.edu/~jmu2m/carnegie-ninch.03.html)

    Unsworth confronts the possibility that we keep on failing to get tools right in order to make room for another tool project, but he doesn’t ask if the continual experimentation with tools is a way of asking through making. To simplify the discussion, we have on the one hand the infrastructure model that proposes to build a lasting ground of tools with which others can interpret texts (available through participating digital libraries) without having to worry about development so they can get on with the work of interpretation. Such projects are worthy even if successful infrastructure becomes transparent and in its transparency becomes harder to interpret.

    On the other hand is a model, which we are warned is too expensive and for which we are not supposed to be trained, where we build the tools (often from other tools) along with the encoding of the texts we want to interpret. The interpreter takes responsibility for both and gradually models interpretations out of code and text. I read McCarty as trying to open this as a possible and very human way of learning through experimentation with computing. This way will never give us the certainty of science where the instruments are not also the subject of inquiry. This way calls us to keep asking about the tools with which we compute the human. The hermeneutical circle is its virtue.

    For this reason I would like to ask a question discussed on Humanist starting in December of 2008, how is a thing (like a tool) knowledge? Can a thing be a theory? Assuming tools bear knowledge, we have to ask how we would interpret the very tools we wish to use to interpret texts. There will be no efficiency in such questions, but a more human hermeneutic. For those making their tools as they go, the asking about the tools happens in the making rather than after – a reprogramming of the computer less likely to scare us from interpretation. We might ask in the making, “with what am I computing?”

  • I like Willard’s notion of subjects and whole schemes of education waxing, waning and sometimes being on the defensive. “Humanities on the defensive” often means the study of English (in the modern English-speaking world), and Leavis represented the high point of the subject’s claims to be the core of learning; his attack on Snow’s “Two Cultures” essay was a rejection of the rival claims of Science, just as Maurice Bowra had made one on behalf of the Classics when he hoped that the laboratories in Oxford would collapse and the scientists die.

    But all this is a bit ahistorical and anglo-parochial (the view, that is, not that Willard holds it): the medieval trivium was grammar, logic, and rhetoric, to be followed by the quadrivium of arithmetic, geometry, music, and astronomy, none of it much like a modern foundation in the humanities. Even now in Germany, everything is a branch of Wissenschaft, albeit split between Geisteswissenschaft (roughly the humanities, but with history falling outside) and Naturwissenschaft.

    I am sure the humanities can look after themselves, however they are classified. But I share what I take to be Willard’s view: computing, not only the magic of the web, now essential to us all whatever our vocation, but also the possibility of yet more intelligent machines in the ongoing but slow march of Artificial Intelligence (in whose ranks I count myself), is a vast human achievement, with consequences for our minds and our understanding of ourselves, and of everything, that we cannot yet see.

    I think a key element, one close to my heart in Language Technologies, is that of rhetoric, in the sense that we can make things mean what we want; Humpty Dumpty should not be seen as a figure of fun in Alice—remember the issue of whether we or the words are the masters—but as a sophisticated analyst of skills we have always had. Spinoza’s philosophy can be seen as a vast exercise in making us consider that the word “God” can really mean Nature. This is a power writers have, and no amount of text statistics can constrain them or take that power away.

  • I’m coming late to the discussion, and much has already been covered, but I want to pick up on Willard McCarty’s point about what can distinguish current discussions from those in the times of mainframes and first-generation cybernetics: computers are no longer out there, an alien or human-made phenomenon to be observed, but “tangibly in here.” That seems right to me (as I type on my notebook, accessing the Internet through a wifi connection at a cafe several thousand miles from where I live, with a limited time for reading and response before my battery runs out). Writing machines are tangibly everywhere, pretty much. Yet many of us, humanists by profession, continue to define our role as analytical and observational, rather than participatory and collaborative. One reason for this, surely, is the expense of software development, also noted by McCarty. That leaves us ‘obedient’ not so much to an overpowering force or instrumental worldview as to off-the-shelf software that rarely is developed with literary concerns in mind, and whose aesthetic rarely has much to do with the ideas about art, culture, psychology, and society that characterize the Humanities. That disconnect between traditional humanistic thought and computational intelligence does not seem to me (in this sense) to indicate a separation of the human and the non-human, or a lack of machinic understanding by Humanists, so much as a failure of Humanists to participate in the construction of the machines we now nearly universally use.

    (McCarty’s point about the failure of the researchers in that one 1940s AI project to consult with poets and scholars about language is again to the point.)

    My deep sense is that these problems will get worked out, seriously and maybe even satisfactorily, when Humanities scholars link up with programmers. (I know, I’ve been told by Deans, “Programmers are expensive!”). When it becomes common for Humanists to draft, not essays only, but software designed specifically for scholarly purposes, I think the debate will shift to the ground McCarty wants to reach, in this essay.

    My own interest, of late, has been to develop a Directory of Electronic Literature – that is, an archive of works that were created IN electronic environments (“born digital”), but an archive that tags works according to their conceptual content, and attracts commentary and criticism, so that the field’s development can be observed by reading and contributing to the Directory itself. The Directory isn’t built yet, but I am learning that most of the assumptions about literature I bring to the project are being tested – if only because I need to come up with a metatag for any of the familiar concepts I want people to access in the Directory. It’s disconcerting to think that if nobody actually names or codifies concepts that have proven valuable in the formation of the Humanities, these concepts won’t circulate.

    There will be things that we’ve done, in our Humanistic disciplines, that we won’t be able to do in digital environments. Narrativity, for example, doesn’t translate all that well, at least few of the entries I’ve read for the Directory have narratives as compelling as those I’m familiar with in print. (But maybe that’s just me.) Other things, like formal characteristics or the technological constraints under which a work was composed, tend to get more attention.

    The difference between what conceptions circulate and what is blocked won’t be articulated by a better understanding of the human and the inhuman: these distinctions will become evident in the degree to which Humanities scholars participate in the machinic communications networks that now define discourse.

  • Øyvind relativizes the relationship to the machine by considering the relationships we have to other entities — and so takes me into deep water.

    Isn’t the question one of the degrees of freedom we have, determined by how successful we are in acting a certain way? This makes the question in part a social one, as the focus on relationships would suggest. I once knew a woman who treated the world around her (New York City) as intentional and mostly alive. Her car would run as long as it – to her a “he” or “she”, I forget which – was happy; when it got unhappy, for whatever unfathomable reason, it would be made happy again by feeding it petrol. To her electricity was omnipresent, flowing through the walls of her house in Brooklyn Heights; sockets were simply put there to make connecting with the electricity convenient and neat. She was free to think like this because she was wealthy and basically could do and say whatever she wished, but surely her relationships to others were affected. Had she been considerably more bizarre she might have had much less freedom (i.e. been confined to Bellevue Hospital or some place similar). On a hunt for food, the way I construe my relationship to a dog accompanying me can have serious consequences, as can my relationship to my gun and to the deer I am tracking – but not necessarily. In other words, these relationships are variable but variously constrained, depending on the other in question and the situation of the moment. My question, I suppose, is how loose are the constraints a computer imposes across all situations? In an artistic sense, what sort of a medium is it to work in?

    This is also a question of the human, not only in the psychological sense. What I think humanity is, or the qualities of humanity I manifest without thinking about it, is how I relate to the machine or anything else in the world. If, like Edmund Callis Berkeley (co-founder of the Association for Computing Machinery), I construe thinking as “computing, reasoning, and other handling of information”, information as “collections of ideas – physically, collections of marks that have meaning”, and handling of information as “proceeding logically from some ideas to other ideas – physically, changing from some marks to other marks in ways that have meaning” (Giant Brains, or Machines that Think, New York, 1949), what will the computer be to me? What sort of a scholarly world will I inhabit? With notions like those, kept to strictly, would I have any chance of survival in the wild? In society?

  • Fear and anxiety are important clues, but of what? That is the question. There are many ways of turning such questions aside. One is sleep (Winner’s “technological somnambulism”), for which reassurance can be a preparation: it’s all right, nothing bad will happen, you can go back to what you were doing without worry. One example out of many during the early years of computing is Franklin J. Pegues’ declaration that “The purpose of the machine is not to dehumanize the humanities but to free the humanist for the important work of literary criticism by providing him with large and accurate masses of data that may be used by him in the work which only he can accomplish” (The Journal of Higher Education 36.2, 1965, p. 107, emphasis mine). This seems very likely to remain true for a very long time, i.e. that we will not be replaced as feared; but why the reassurance, repeated again and again throughout this period? What does the fear of scholars then point to? Another way of turning the question aside is the hype we know so well, which has made us numb to the anomalies on which some of it is based. Another is endlessly named and embellished fear itself, fear lodged in something that contains and domesticates it.

    We are endlessly good at not facing the hard questions.

    In other words we need to know what sort of worrying keeps the eyes propped open and stoke it for all we’re worth, but boot aside (thanks again, Peter) the worrying that paralyzes or paradoxically soothes. Does total and complete enlightenment lead to uninterrupted bliss? No, I recall someone saying, not until all of us have attained it.

  • I have the feeling that Borges has written the story I should be invoking at this point to comment on Robert’s fine comment, especially his figure of harmonic oscillation between horse and rider: the horse such as we know; the centaur of myth; the human lovers in perfect, dynamic union. Is that the trajectory we’re imagining? Actually I think it is. Quite aside from the possibility of achieving the end-point of that trajectory – surely a moving target – it’s the intention that seems most interesting to me, the realisation imaginatively of what we’re trying to do.

    There’s also, however, the horses, or Houyhnhnms, Gulliver meets and discourses with but never, never rides. The trajectory of his longing is to be at one with them, but the self-denial this requires leaves him nowhere, perfectly counterfeit. (Again, see Kenner’s The Counterfeiters.) The Swiftian question, then, is a form of “what’s wrong with this picture?” If the human in identification with the perfect Glatteis of logic and decorum is satirical, ridiculous, spiritually crippled, then what vision of humanity does it illumine? “Back”, Wittgenstein said, “to the rough ground!”

    Robert is quite right about my title, but it does have a history. The first version was, “Whom am I computing?” But that seemed all too pedantic in its grammatical correctness, stuffy, trite. So I wrote, “Who am I computing?” And then I saw the possibility of the comma and bingo! Yet once more, reasoning in language showed the way.

  • Like Chris I value Heidegger’s contribution to our attempts to work out how computing gets involved in thoughtful work. And as Chris more or less said, what matters for me is not just the thrownness but especially the breakdown which Dreyfus emphasizes in Heidegger, and which Polanyi stresses even more in his own writings. When the computational model (which, in its own terms, must be perfect merely to run) fails, then we theorize and learn. At the beginning of Winograd’s and Flores’ important book, Understanding Computers and Cognition, they write, “in designing tools we are designing ways of being” (p. xi). And I’d say, make no mistake about it; it’s that serious, what we’re doing with computers. Since that’s the case, it means that however small a facet of the human we may be modelling in software, the inevitable breakdown of the model is a clue to a clearer sight of it, a clearer sight of ourselves. You might say we’re polishing the computational mirror, except that the polisher is changing in the process.

  • Along the way several people have remarked on the computer as an instrument of art or analogous to a musical instrument. Tamara’s relation of her experience in acting through or with her computer to effect something is like that, and it certainly challenges my sense of opposition. But I think there is a sense of obedience that survives her creative, instrumental playing of the machine. Someone in the Arts & Crafts Movement (I forget who it was) remarked that “within the limits of my craft I have perfect freedom”. These are creative constraints, enabling constraints to be obedient to which is paradoxically not a suppression but a realization of intelligence and individual will.

    But…

    How about the more agonistic practice, the practice that goes up against the limits of what is possible? When pushing the limits, doesn’t the struggle create an opposition, an enemy even, to be defeated? Again I think of Gilgamesh and Enkidu. Again, being thrown into a technological practice can be a wonderful and creative experience, but the breakdown teaches, and that is a sudden realization of something opposite that was not there before. When the comfortable idea — ah, the truth one has articulated so elegantly! — becomes intellectually claustrophobic, shouldn’t one rejoice in the liberation?

    This is to describe the cycle of modelling: some idea of the human rendered algorithmic, pressed until it runs aground and by doing so expands what one thought the human was.

  • Thanks to Yorick for “the power that writers have”. In that book I keep rattling on about, The Counterfeiters: An Historical Comedy, Hugh Kenner speaks of a language which theory has separated from its speakers — the belief that Language is “an intricate, self-sufficient machine with which mere speakers should not be allowed to monkey, unless they have first mastered the instruction book” (p. 84). As defense against this belief (which at the time must have been all the rage in linguistics) an English teacher in my ninth-grade class read out some poetry by a Korean teenager who had just learned English. It was ungrammatical to say the least — and wonderful. Now, half a century later, I realise how very hard it is to summon the power that Yorick speaks of, especially against our “close, naked, natural way of speaking”, and how marvellous when, e.g. in Seamus Heaney’s poetry, it takes over.

    In that light it is interesting that, in the early years of computing’s encounter with language, attempts to use the former to produce the latter were greeted with such howls of protest, especially when poetry-writing was the object of the computational exercise. One of the louder ones was F. R. Leavis’, in the pages of the Times Literary Supplement (23 April), in a front-page article entitled “’Literarism’ versus ‘Scientism’: The misconception and the menace” (later reprinted in Nor Shall My Sword, 1972). He tells of encountering “point-blank… the preposterous and ominous claim” of computer-generated poetry. It’s not difficult to imagine a truly preposterous claim, which this claimant may have made; it’s also not difficult to agree with many of the prescient remarks Leavis makes about the steep decline then beginning in British higher education. (O tempora, o mores!) But what interests me in the present context is the fear which such claims stirred up and the deaf ear on which highly intelligent and critically cautious proposals (such as Margaret Masterman’s, also articulated in the pages of the TLS) fell. I wonder – with, I must say, insufficient evidence and hope for argument – if the fear was stirred by a barely hidden suspicion that the 17C Royal Society’s English was about to find its enforcer, so that the human as then conceived would thereafter be struck dumb?

    Again, Leavis, here referring to that claimant as the philosopher she was: “That any cultivated person should want to believe that a computer can write a poem – the significance of the episode, it seemed to me, lay there; for the intention had been naïve and unqualified. It could be that because of the confusion between different forces of the word ‘poem’. And yet the difference is an essential one; the computerial force of ‘poem’ eliminates the essentially human – eliminates human creativity. My philosopher’s assertion, that is, taken seriously, is reductive; it denies that a poem is possible – without actually saying, or recognizing, that. If the word ‘poem’ can be used plausibly in this way – and by ‘plausibly’ I mean so as to be accepted as doing respectable work – so equally can a good many other of the most important words, the basically human words. Asked how a trained philosophic mind in a cultivated person could lend itself to such irresponsibility, I can only reply that the world we live in, the climate, makes it very possible.”

    Leavis reacted to Snow in exactly the same way, as a dark omen (and, in his infamous Richmond Lecture, even jokingly asserted that Lord Snow’s novels had been written by “an electronic brain called Charlie”). But it is abundantly clear from much of the rest of what I have read from the period that the computer had occasioned great waves of anxiety over human identity. We might say the question was, what is the human now? Isn’t that now our question as well?

  • Peter Batke

    Those of us in Willard’s ever young generation came to computing after our training was essentially complete. Thus we have had to take our old kit, lots of canvas and wooden spars and clanging tin pots, and go camping in a new world of Gore-Tex, titanium and plastic. We can still contribute generally, since we can dig out wonderfully ironic nuggets from the history of thought, and besides, things have not changed that much at our end of campus. I always love to follow one of Willard’s expeditions into the 18th or 19th century, when the future could be intimated but our current present was still far off. It reminds me of a more comfortable time.

    It is really our current present that is pressing upon us. The sciences are running with it – they are gone; they have long since left the question of moral and physical sciences behind. I mean, who is going to put the chair of the Department of Nano Technology on the rack today for denying that the earth is flat? In their rush to reorganize their research every 14 months, the scientists have, however, dropped things out of their all-carbon-fiber kits, things that we humanists, essentially incapable of reorganizing our own research every half-century, can use to get on with our tedious plodding.

    There are humanists that are more than worried by the current state of the knowledge thing and working frantically to understand it, or at least find some words to describe it. Of course, frantic, worried humanists are nothing new; we have worried about everything, in the most recent distant past, about the knowledge explosion, that little pop that went off in the 60s. The most recent electronic issue of D-Lib has an article on the dimensions of the most recent exponential knowledge tsunami [Time Challenges – Challenging Times for Future Information Search, Mestl et al.]. The ideas offered there to deal with this deluge may well have been intimated earlier, but they push us out of our horizon, literally. Just the mere notion of 2 stacks of books from earth to Pluto next year should make us think. I will not summarize the piece; it is an easy read, but not without things to challenge, and within easy Google reach.

    So I am left wondering: can we look for guidance to the past? Yes, emphatically, we can get guidance about love, about death, about raising children, dealing with siblings, parents. We can learn about power, about tragedy, about joy, about illness and death. We can even learn about life after death. We may even be able to learn about being overtaken by events. But where should we look for guidance on computing? In the 40’s? Earlier? With Turing and von Neumann? The answer is emphatically NO! We have to come to grips with the issue that, failing a disastrous crash (we should be so lucky), the system of information will expand at such a tremendous pace, and the tools to present information will morph so dramatically, that the past and even the visionaries of the past have nothing to tell us, except good luck and God bless. Certainly the things our teachers taught us may give us solace as we gaze at our green pasture; they will not inform the information growth in the present. And as a final point, fantastic global information growth may be a problem for those of us forever young; for the actually young it is the opportunity of a universe.

  • Willard,

    The title offers a first person narrative. How does the piece navigate its way to a first person plural?

    The agent of the insertion of a plural “we” is a citation from Northrop Frye.

    As civilization develops, we become more preoccupied with human life, and less conscious of our relation to non-human nature. […] We have to look at the figures of speech a writer uses, his images and symbols, to realize that underneath the complexity of human life that uneasy stare at an alien nature is still haunting us, and the problem of surmounting it is still with us.

    How does one surmount an uneasy stare? The answer may be embedded in some of the textual features that are located in what is traditionally considered the paratext.

    The bibliographic items in the apparatus of the Works Cited, if read by a machine looking for plurals, yield Bruner’s possible castles at the beginning of the list and Vickers’s keepers and players at its end. The educated imagination is located between them. Such are the vagaries of alphabetical lists that such positionings can be read off them (with a little jump from processing strings to identifying sememes).

    If the stare at the alien is uncovered by looking underneath, can surmounting that stare be achieved by a looking again for a between hovering on the surface, a scanning?

    The title, the citation, the bibliography – the textual mechanics can be regarded as a set of interlocking machines. Reading through a machine “we” become singular. Uncanny if not alien.
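    François’s machine reading of the Works Cited can be made literal. What follows is only a toy sketch in Python, with invented placeholder entries standing in for the actual apparatus: sort the entries alphabetically, then scan each for plural-looking word forms, exactly the sort of string-level pass that stops short of sememes.

```python
# A toy "machine reading" of an alphabetical Works Cited:
# sort the entries, then flag plural-looking word forms in each.
# The entries below are invented placeholders, not the essay's apparatus.

import re

works_cited = [
    "Vickers, B. - keepers and players",
    "Bruner, J. - possible castles",
    "Frye, N. - the educated imagination",
]

def plural_candidates(entry):
    """Crude heuristic: lowercase words ending in 's' (but not 'ss')
    are treated as possible plurals."""
    words = re.findall(r"[a-z]+", entry.lower())
    return [w for w in words if w.endswith("s") and not w.endswith("ss")]

for entry in sorted(works_cited):
    print(entry, "->", plural_candidates(entry))
```

    Note the false positive: “vickers” ends in an s and is flagged as if it were a plural. The gap between matched strings and identified sememes is precisely that little jump between processing strings and identifying meanings.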

  • In Peter’s response to my original posting I am particularly interested from an historical perspective in the fact of feeling both left behind by progressive disciplines and overwhelmed by what we sometimes call knowledge (as he does) but more often information. Let me take the latter first, though I think the two are intimately related.

    From my own experience over the last few decades of involvement in communicating such knowledge/information, I know that the expression of “infoglut” is based as much or more on a qualitative reaction as on a quantitative datum of experience. I’ve come to the conclusion that the problem lies in figuring out how to relate to measurably changed volumes of stuff, as when the number of books in a library reaches the level at which a catalogue is required to find the books quickly and reliably. (This seems to have happened quite early in the history of libraries, as the finding aids in Mesopotamian libraries suggest.) The adjustment to new orders of complexity has happened many times in the past (the beginning of Vannevar Bush’s “As We May Think”, July 1945, is but one recent example out of many), but we’re still at it. This is not to say that we’re slow or stupid – we’re facing a reconstruction of our ways of relating to the world. So we find ourselves repeatedly at the point of inventing new equivalents of that library catalogue and concomitant attitudes and behaviours. Some of us are old enough to have been raised with the notion (outmoded then, but still taught) that in going at a research project one should as a matter of course read everything that had been written about the subject. I doubt anyone even pretends to do that now. But what standard of sufficiency do we have now? What do we teach our doctoral students to do? And looking at what we actually do, how do we square this with the ultimate goals of scholarship? Faced by JSTOR et al. (which is where many students now begin), and so with the spread of a topic across many disciplines, and given the limited time mortality imposes, isn’t there a choice between going wide and going deep? Richard Rorty, discussing Gadamer in “Being that can be understood is language”, has an argument for this situation we’re in that seems to me very important indeed – and one which we haven’t yet taken in. 
The basic question is, I think, how do we humans already navigate a world in which things smaller than a grain of sand could absorb lifetimes of study? And (to echo both Warren McCulloch and Gregory Bateson), what is a human that he or she does this every day in every way?

    As medicine for the feeling of being left behind by the sciences zooming ahead, I recommend neuroscientist Semir Zeki’s note at the beginning of A Vision of the Brain (1993), quoted by Philip Davis, “Syntax and pathways”, Interdisciplinary Science Reviews 33.4 (2008):

    The study of the brain is still in its infancy and many exciting ideas about it remain to be generated and to be tested. I hope that no one will be deterred from asking new questions and suggesting new experiments simply because they are not specialists in brain studies. Leaving it to the specialist is about the greatest disservice one can render to brain science in its present state… Perhaps what is needed most in brain studies is the courage to ask questions that may even seem trivial and may therefore inhibit their being asked… You may find that you are making a fool of the specialist, not because he does not have an answer to your question, but because he may not have even realised that there is a question to answer. (ix)

    In the humanities we have many such questions (Davis, an English professor, asks some of them). But rather than think of a race (to what finish-line?) in which the winner leaves the loser behind, how about the old story of the blind men and the elephant from the Pali Buddhist Canon – but, since we’re all blind, without the clear-sighted Buddha? Being a humanist I am inclined to think that we have the hardest problems of all, but I have to admit I’m doing well even to grasp in well-diluted terms what my scientific colleagues are working on these days. And from my experience of being a young proto-physicist as well as a reader of intelligent popularizations, I’d say their problems are both very hard and very, very deep.

    I wonder too about this sense of the humanities losing out to the sciences in epistemological terms. I can understand that we’re losing out socially and culturally, that we’re being squeezed by the janitocracy of senior administration, whose arm is being painfully twisted by government agencies et al., who are in constant fear of being turfed out by a disgruntled public, who are rightly disgusted and angry at the behaviour of the bankers ad nauseam. But I’d not be at all surprised to learn that genuine scientific research is as threatened as genuine research in the humanities by the persistent wave of anti-intellectualism that constantly erodes our shoreline.

    The humanities are concerned with envisioning a life worth living, within which computing now has a role to play, directly or indirectly, for us all. If, as I believe, “in designing tools we are designing ways of being” (Winograd and Flores again), then where else but to the historical disciplines can one look for guidance?

    So let me make a strong statement: there can be no more important question for computing’s overall direction than the question of the human. Computing helps us work on it by giving us a powerful way to model our responses. Work enough for all the world’s humanists for a long time to come – if we but have the wit and the courage to take it up.

  • Francois has a gift for almost reading a text, that is, for remaining aware of the ways in which surface-features of a text condition the reading while it is happening. Most of us attend from the many voices and signs and signals to the argument unfolding as we go. He simultaneously attends to them. Processed by all the textual machinery after being processed through the digital media by which the text is presented, are we then closer together than we were before, if not unified? I’d suppose the answer is yes, and I agree that it is uncanny. It is one form of the head-breaking problem of context: how, for example, the first few words of Ovid’s Metamorphoses set limits to what can happen in the following 12,000 lines and open up worlds within them. How does that happen? “Once – upon – a – time…” and already we know where we are. That is for the humanities to explore, the quite mysterious “alternativeness of human possibility” (again Bruner’s wonderful phrase).

  • Peter Batke

    Let me respond to Willard’s response. I had purposefully bowled a googly in hopes of hijacking the discussion away from secret handshakes and obscure references as humanists congratulate themselves for checking their e-mail at Starbucks. I’ll play umpire and declare lbw. For the Americans, I threw Willard a screw ball and I got a long, long fly ball that was caught at the track, or let’s say it curved foul to encourage another swing. I understand the purpose of this forum is to encourage discussion, and the topic is an exploration of solipsism, something my computers and I have never indulged. They understand that they are valued tools; their only humanity is to be an extension of my mind for work and play. I like to concentrate on the work now.

    Despite the human in humanities, humanists are not so much about examining the “human” as about examining texts. Scientists examine the human as well. The novelist may be the postmaster general, as in the case of Trollope, but his work is examined in detail by humanists; it is read by everyone else. Trollope is concentrating on the human; the humanists are comparing him to Thackeray. That does not mean that a given humanist would not like to be something else and may even get away with it. I have been working on Leunclavius, my favorite humanist currently, and I am continually amazed how carefully he worked and how he saw the tasks to be done much as we would see them today. And information was hard to get back then, as was a good doctor.

    Let’s say we all started way back when, with the first sentence of Aristotle’s Metaphysics, and we were happily expanding the human innate impulse for knowledge when dogmatists tried to monopolize the discussion, mostly by burning people with good ideas. While successful initially, the empirical perspective would not be denied, except in what became the humanities in the 20th century. I think in the 19th century there was considerable “science” going on in the collection of texts, understanding languages, compiling biographies and chronologies, certainly at German universities. Of course there were also bogus theories, but no one would confuse belletristic writing with humanistic scholarship. I think it was my generation, BA 68, that was allowed, for the first time, to write dissertations on the living or recently deceased. This, and a host of other factors having to do with an experiment in mass education, including the barely educable, led to a softening of the humanities, and a vast inclusion, all in all a worthy effort. Hats off to American universities. That is not to say that hard scholarship was not going on. But there were just too many people involved, both to be trained as scholars and to be taught as novices in all the rigorous standards of textual scholarship.

    And then came computers. Computers have been very good to us, and I am speaking for the legion of (then) unemployable recent PhDs who found rent money and much more in the computer centers of research universities. The spirit of inclusion allowed computing projects to flourish in various orbits, because everybody had to learn word processing in a hurry. Back then, I think, the computer was a shadow that crept into happy lives, an exacting demon that would have its way unless it could be tamed. People were having nightmares.

    Computers were also causing scientists nightmares. But the computer was tamed, became the ubiquitous tool that allows us to engage in this forum, both technically in posting our response and belletristically in that we can hang ideas on the concept.

    The question is how we deal with its history. I maintain that the present has brought us to such a pass with computers that we must rethink the work of the humanities. This may not be possible in an environment of budget constraints, when one has to be grateful for every student. But that is not really interesting, even if true. Information is piling up. This discussion will add a good 20 pages; all of it will have to be indexed by Google so that random wanderers can read what we have written. Another 20 pages next week. The point is that the quantity of information is such that some serious work needs to be done. First, the information generated exceeds the global storage capacity. Second, a third of the information stored consists of duplicates. Third, non-textual data can be found only through metadata. And finally, with textual data it is possible to find text even when there is no metadata. I hope I am inspiring some sense of dread, especially with the last point. It is only an expedient to speak of info-glut and to try to build a wall around our disciplines where we can be secure with our real knowledge.

    As an experienced scholar picks up a book in her field, she is aware of the metadata. I have books on my shelf that represent a pretty complete set of information or knowledge in a field, and some less complete. I can log on to Hollis et al. and get some sense of what I may be missing. I can download PDFs from Google Books and complete my collection. But not everyone is playing by my rules. Google has scanned some millions of books and is indexing the questionable OCR to rank pages. That means that books are retrieved not by the metadata that the humanist has internalized in her work, but by snatches of words in an algorithmic indexing scheme. I have thought long and hard about this until I have warmed to the idea. Let us forget about our “secure” knowledge, which will prove not to have been that secure by the next generation anyway, and let us just think about pages. Let us NOT take the metadata, then internalize the book, then put forth some descriptive analysis based on what we know that we know; let us INSTEAD let the search bring us the pages, without the privileging by the profession. This may be solid post-modern ground.

    I’ll leave this thought and fly off on some tangents. It would be best if humanists patched things up with science in their own heads (scientists may not be that concerned). Especially when it comes to computing and the humanities, we cannot ignore the methods of science. Statistics must be taught to computing humanists as it was taught to social workers in the ’60s and ’70s. If someone does not want to learn statistics, they should not do humanities computing; let them do philosophy, or communication.

    We cannot cast aspersions and innuendo at science – feeling good about our focus on the human, which we would deny them, the heartless er.. er.. researchers – yet dig around in the history of science for inspiration. We are all in the same race that began with the thought that humans could make sense of their world. If we look for inspiration, let us look at the group that was at the 1964 Yorktown Heights meeting on Literary Data Processing. My own favorite is Stephen Parrish, a working scholar who found a valued tool in the computer. To read some of his work go to http://www.princeton.edu/~batke/lbs/parrish.htm (pardon the copyright infringement for a good cause, and pardon the site – it is a piece of computing anthropology that I have not touched in years). Let us NOT look at V. Bush, although I used to be a fan; here is a man who could not grasp the digital. Let us instead rediscover John B. Smith – not his old work on Joyce, but his new work. Let me not go on; in any case, the list of citations should weigh toward the present, and there are many candidates with important work.

    I should also add that I realize that the only real advancement in computing in the humanities is to be in a real department (English, History, Computer Science …), and congratulations to all those who have made that leap. But for the rest, let us keep an eye on the technical issues, which include theoretical issues, and not imitate our non-computing colleagues, who do need us even if the dean does not know it.

    In conclusion I would plead for a focus on the task of text. We are overwhelmed with text. Let us not insulate ourselves in the values we carry from the past and traipse off into the land of belles lettres to craft enigmatic sentences; let us instead imagine a world where the applied-math data-mining people actually can come up with the answers, or some answers. Weirder things have happened, I think. And that is my swing for the fence; let’s hope it was not a worm-burner.

  • I have a very distant relationship to baseball, so I’m at a bit of a loss to know what game to play now, or how to respond to all the balls flying about. Secret handshakes? I wasn’t aware that any are being given, but perhaps I’m too well trained even to know that I am signalling membership in a self-obsessed group. Solipsism? Perhaps I’m hallucinating so badly, or so well, that I’ve only imagined real exchanges among real people who, like me, are strongly motivated to communicate, even willing to take the sort of risks we can take to get a conversation going and to keep it going. Among the many things I learned from Northrop Frye was that in Paradise, at least as Milton imagined it, conversing is playful. So here we play in the delight of language, as best we can, to manifest, as best we can, what it means to be human. And if that’s normative, then I say that to imagine a life worth living is to imagine a world in which the usual is the normal.

    I think I’ve said here a number of times that what people fear really interests me, especially in the context of my current research into the history of literary computing. There are many expressions of fear running through that history and through this discussion, from the keynote about theft of the humanities to various statements of exhaustion – the humanities being all played out, etc. What does all this tell us? It’s certainly clear from the history of computing in its intersections with the humanities that we have felt and continue to feel our identity (or, less flatteringly, our ego) being challenged fundamentally, as it was by Darwin, Freud et al. Now that we have this (discovery, device), who are we? Is this all there really is? What astonishes me about our history with the machine, again and again, is that we think its purpose is, as Blake said, to put the light of knowledge out, and we get so alarmed by the slivers of light coming through the cracks left by whatever latest ham-fisted solution, version 2.0, we’ve just tried. But don’t worry, our advertising masters (who look suspiciously like our line-managers) say: real soon now, version 3.0 will put us to sleep for good. In other words, I say back, the failures are the point. Those fears are signs of walls coming down, telling us how to hasten their destruction.

    But enough for now. Thank you, Peter.

  • To summarize: that is the problem. Reading through the text that has accumulated around my original posting these last two weeks produced the unsurprising impression of variations on a theme. But with Peter Batke’s commentary on our social conformities in mind, I applied, in a very rough and ready fashion, the standard approach to the analysis of text that I teach my students: gather it all together into an unstructured file, generate a list of word-frequencies, scan that list and group the words morphologically and, as seems to befit the text, conceptually. I grouped the words around those terms that ranked high among those occurring 10 times or more.

    As with reading, the result here is also unsurprising – but suggestive. One thing it suggests is that Peter was right in detecting social conformity.

    Words morphologically related to “human” (“human”, “humanist” etc, including “life”) occurred 209 times. Words related morphologically and conceptually to “computer” (including “machine”, “tool”) 250 times. “Human” was the highest ranked open-class (content) word; “think” and “computing” tied for second place. From this one could draw the conclusion that we talked about the human and the computer. Unsurprising, indeed – but confirmation that we stayed on topic. (Confirmation is very important in text-analysis; if you cannot confirm that what you already know is true, then your method is suspect!)

    Words relating to what we do (actions and products of action), any one of which occurred 10 times or more, I lemmatized. In order of frequency, most to least, they are as follows: say, think, question, work, know, ask, read, make, see, understand, history, research, idea, problem, point. The objects of attention, in frequency order from most to least, were: text, language, literary and words. Again, utterly unsurprising, but perhaps useful as a display analogous to those emblem books of the trades, such as Jost Amman and Hans Sachs’s Eygentliche Beschreibung Aller Stände auff Erden [Exact Description of all Ranks on Earth], popularly known as the Ständebuch (1568). Such a display provides inter alia a starting point for explanations of what exactly it is that we as humanists do. The ability to provide such an explanation on the spot can be quite handy. It has become, I’d argue, an urgent necessity.
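    The rough-and-ready procedure described above – one unstructured file, a word-frequency list, then morphological grouping – can be sketched in a few lines of Python. This is only an illustrative reconstruction: the sample text, the crude tokenizer and the grouping stems are my own assumptions, not the exact method used on the Forum corpus.

```python
import re
from collections import Counter

def frequency_list(text):
    """Tokenize crudely into lowercase word forms and count them."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(words)

def group_morphologically(counts, stem):
    """Sum the counts of all word forms sharing a given stem."""
    return sum(n for word, n in counts.items() if word.startswith(stem))

corpus = "the human and the computer: humanists think about computing"
counts = frequency_list(corpus)
print(counts.most_common(3))                      # highest-ranked word forms
print(group_morphologically(counts, "human"))     # human + humanists = 2
print(group_morphologically(counts, "comput"))    # computer + computing = 2
```

    Anything fancier – proper lemmatization, conceptual grouping – is layered on top of exactly this kind of raw count.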

    Decades of work in text-analysis have shown, however, that the most significant words for prying into the less obvious qualities of a text are the very ones we as readers consciously ignore, those which (I am told) linguists call the “closed-class” words, or more accurately the class of words that a language tends very seldom to coin. These are the articles, particles, pronouns and so forth. In my analysis I went particularly for the pronouns, as they are a good indicator of perspective and audience – and they yield their secrets at least in part by methods short of the stylometric tests that require much specialized training properly to apply and considerable experience to explain.

    Of the pronouns, the most frequent, and very high on the frequency list (9th ranked, occurring 233 times, or 1.234% of the corpus), was “I”, followed immediately by “we” (218 times), then next but one, “it” (181). All together, lemmatized, the first-person pronouns occurred 597 times, the second-person 41, the third-person 337 (his, 37; her, 35; it, 212; they, 63). One would be justified in concluding that here is a text in which people write mostly about themselves, what they do and so forth, and address others in the group.
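    The pronoun tally above is easy to reproduce mechanically. Here is a minimal Python sketch; the groupings of pronoun forms by grammatical person are my own illustrative assumption, not necessarily the lemmatization used in the analysis.

```python
import re
from collections import Counter

# Illustrative pronoun groupings by grammatical person (an assumption,
# not the exact lemmatization used in the analysis above)
PERSONS = {
    "first":  {"i", "me", "my", "mine", "we", "us", "our", "ours"},
    "second": {"you", "your", "yours"},
    "third":  {"he", "him", "his", "she", "her", "hers",
               "it", "its", "they", "them", "their", "theirs"},
}

def pronoun_profile(text):
    """Count pronouns per person; report each count and its share of all tokens."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = len(tokens)
    profile = {}
    for person, forms in PERSONS.items():
        n = sum(counts[f] for f in forms)
        profile[person] = (n, round(100 * n / total, 3))
    return profile

print(pronoun_profile("I think we write about ourselves; it tells us who we are"))
```

    Run over the full corpus, counts of this kind yield a first-/second-/third-person profile of the sort reported above.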

    Going well beyond the numbers, I also carry away from many discussions concerning the humanities – including the one in which I’m about to participate, at Reed College, “Are the Humanities Still Relevant?” (www.reed.edu/alumni/reunions/alumni_college.html) – a strong sense of a beleaguered minority hunkered down against a coming invasion. Evidence for a society against us, or indifferent to us, is not hard to find. Reasons why the collective behaviour of the humanities should merit such a reaction are likewise not hard to find. But the very real threat from within our institutions by those sympathetic to or in fear of extramural antipathies needs to be met by a response that at base is rooted in the ideals of the humanities – as long as, that is, our institutions remain educational. Questioning the human is what we do, what we have always done. To do that we need to stand apart (no wonder the appearance of solipsism) while regarding critically what is going on in the world (so that solipsism remains only a countable appearance).

    The great ethno-historian Greg Dening used to say that one should be able to walk away from a book one has read with a single sentence or two in one’s head. These two weeks have not produced a whole book (though more than one is latent here). But for this my sentence is, “Now that we have made this device, who are we?”

    My thanks to all who participated. Well done!

  • Well done indeed, and thanks to everyone who participated by writing in or reading along. The conversation continues in our Facebook group, http://www.facebook.com/group.php?gid=52472677549

  • Geoffrey asks, “What does the telescope tell us about the human if it is aimed at the stars?” Margaret Masterman spoke of a “telescope of the mind” for our text-analytic tools. What do modern astronomers think they are doing – in human terms, that is? There’s the quest for origins in the Big Bang; is this cosmology recapitulating ontogeny in the human scientific imagination?

    What do the very few scholars who still try to peer through the poor approximation we have of Masterman’s metaphorical telescope think they’re doing? “In the beginning was the word”? But what they actually do with something much more like a microscope (without considering Ian Hacking’s question, “do we see through a microscope?”) reveals that the spirit which gives life to the text is elsewhere. We say it is in the always expanding promissory note called “context” but don’t really have any idea what we mean by that.

    The text-analytic project that has limped along for so long was founded, Wittig pointed out, on the notion that the alphanumeric data-stream abstracted from the marks on the page – ignoring the page and everything else of design, physicality, provenance, cultural significance etc. – would yield positive knowledge somehow related to, and somehow correcting, fallible human readings: a just-the-facts-ma’am approach to literary scholarship made credible by the new machine. The marvel is that in fact, through statistical procedures, some of that positive knowledge is emerging. But what we’re not paying attention to is the setting in which this happens and how it happens. The best writings in this genre demonstrate the process: the slow, careful probing, the initial trials against what is already securely known, the experiments now in this, now in that direction, always with reference to readerly knowledge, the trying out of now this test, now that one. It happens, that is, in an intimate interrelation of a scholarly reader, a literary text and evolving statistical methods – I would dare to say a resonance among these elements.

    There are, I think, two problems here. One is the setting aside of the human for the security hoped for in indisputable data (which, of course, involves a highly constricted choice of what to consider as data). The other is the old model of batch-computing, long outgrown but still affecting what we think we’re doing. The basic question is the one Milic asked in 1966: what is the next step? How do we involve the human explicitly? Or, as was formerly asked, how do we grow our idea of the human to include the machine (which, again, is a human invention)?

  • I think of Joseph Tabbi typing on his notebook, “accessing the Internet through a wifi connection at a cafe several thousand miles from where I live, with a limited time before my battery runs out, for reading and response”. Thank you, Joseph!

    Such happenings are now so commonplace as to be unremarkable – except for the fact that, however commonplace, they are what is happening and so constitute part of what we need to understand of how human discourse, here academic human discourse, is reforming around new circumstances. One consequence is the greater informality of this particular interchange and many, many others like it – in particular its conversational qualities. I’ve been writing in an evolving academic mode in this medium for the last ca. 23 years, making what I write public (etymologically, publishing it) across the Internet to an audience so diverse as to be verging on Everyman. Well, ok, academic Everyman. What I’ve observed and pushed are the greatly increased opportunities for using language much more riskily – to venture ideas and see what happens to them rather than to state suitably bullet-proofed arguments. In other words, to engage in scholarly interchange that is not only less formal (rhetorically more malleable) but also at a pace verging on the interactive. It is not much of an exaggeration to say that commonly I turn from my writing to my favourite Internet discussion group to ask, in effect, what do you think of this? Sometimes I say something I know is not exactly true but suspect will act as a provocation to unearth something I cannot quite get to. To what extent do others do this? I don’t know. But I observe that it is happening, that it works marvellously well, and so I mention it here.

    I mention this conversationalizing of academic discourse in the humanities (imitating somewhat the social sciences and, e.g., computer science) because of the dynamic of engagement and the rapidly increased pace at which we in this Forum are working on the question of the human, meanwhile helping to change it. But what about the tradeoffs – that is, what are we giving up for what I see as the benefits of being thus?