In a panel discussion at the 1998 “Bookends” conference at SUNY Albany, Jacques Derrida spoke of Internet initiatives under way by his younger colleagues in France at the time. The first thing they would do, he said, is set up editorial boards, appoint in-house grant writers, and establish closed review processes – effectively replicating the foundations of recent print practice in the new media.
At the time, I may have allowed myself a moment of self-congratulation, since I had done no such thing in the journal I’d founded four years previously, www.electronicbookreview.com [ebr], a DIY literary critical enterprise. Today, I’m not so confident; and I recognize how many institutional inheritances I depended on, not least the collegiality that let me work with and attract contributions from scholars like myself, as well as programmers and designers interested in print/screen transformations. Certainly the Internet prior to the Telecommunications Act of 1996 was less dominated by commercial content and instrumental constraints, and it was easy enough to take advantage of a medium created by scientists for the purpose of freely sharing documents among colleagues working at a distance from one another.
I won’t address here the commercialization of content on the Internet and the neoliberal capitalist turn in academic life. (That was critiqued, as it was happening, notably by Marc Bousquet and Katherine Wills in their ebr thread, Technocapitalism, and the accompanying Alt-X Critical E-Book, The Politics of Informatics.) Instead, I want to look at the way institutional practices can be transformed, if Humanities scholars can make consistent, collaborative, and (not least) frugal use of the affordances of network technology in gathering literary works and forming conversations around them. My recent experience reading 300 works of electronic literature for preservation through Archive-It (archive-it.org), an initiative co-sponsored by the Electronic Literature Organization and the Library of Congress, suggests that the oft-noted “obsolescence” of works published in perpetually “new” media is an institutional and cognitive problem as much as a technological challenge. Capturing works on the Internet at stages of their development is technologically feasible. What is hard is finding the works worth preserving, defining their literary qualities, and establishing incentives for readers to go back for more.
Whatever transformations the Humanities undergo in new media, a condition for the field’s possibility has to be the ability to re-read, and the freedom to cite, the work of peers and precursors. This is the task of editorial boards and granting institutions in online environments: those who vet works need to look not only at what an author has accomplished, but at what the work might become, as it circulates among other works and, over time, collects comments from readers as well as its initial peer reviewers. In print, the credentialing process ends when the contract is signed; in e-media, the work is vetted continuously (or could be) and lives or dies depending on the readings it attracts, the re-writings it inspires, and how these are presented. That trail of commentary, not the number of objects sold, constitutes popularity and presence in electronic environments (though business models will need to emerge, in university presses or other not exclusively commercial enterprises, that can recognize this processual aspect of the digital literary object).
The conditions that matter to current research in the Humanities are constrained (though one hopes not pre-determined) by the collective creation of a “Semantic Web” or “Web 2.0” under the direction of Tim Berners-Lee, the original architect of the World Wide Web. To some extent, this development answers the need to locate, identify, and re-circulate information produced and found in electronic environments. The ability to “tag” material conceptually, rather than to search character strings, is of course attractive – one might even say seductive – for literary scholars. There is first of all the promise, roundly critiqued by Florian Cramer, that “semantic technology” can “allow people to phrase search terms as normal questions, thus giving computer illiterates easier access to the Internet.” Easier access (what Alan Liu more broadly critiques under the phrase “user friendliness”) generally means a more complete cluelessness about what is actually being searched at the level of code because, at this level, there can be no “semantic language understanding.” As Cramer points out in his “Critique,” that grail has eluded Artificial Intelligence researchers for decades.
Neither should humanists hope to realize, through networked computers, the Aristotelian dream of universally valid categorizations. The so-called “ontologies” that computer scientists create are not ontological in the philosophical sense (no more than, say, the glossy “literature” promoting a product or company has anything to do with novels, poems, or essays). Semantic Web [SW] “ontologies” are not ways of being, but rather ways of sorting and selecting. Cramer calls them “cosmologies,” in the sense of Borges’s “Chinese encyclopaedia”:
In its remote pages it is written that the animals are divided into: (a) belonging to the emperor, (b) embalmed, (c) tame, (d) sucking pigs, (e) sirens, (f) fabulous, (g) stray dogs, (h) included in the present classification, (i) frenzied, (j) innumerable, (k) drawn with a very fine camelhair brush, (l) et cetera, (m) having just broken the water pitcher, (n) that from a long way off look like flies.
Cosmological systems, while good at proliferating differences, remain indifferent to meanings and ambiguities that, whether or not they are recognized in the world, certainly characterize literary work in any medium.
From a literary viewpoint, more promising than the SW’s knowledge management pretensions are the really quite modest methods of tagging texts. Assigning meaning at a second order, apart from the actual text encoding, depends entirely on people and the professional and communicative networks we establish. The creation of professional standards is essential. Otherwise, semantic markup will only redouble the work needed to create the text proper, and we’ll have built, in Cramer’s words, “a library whose catalogs outnumber the books they reference.”
That can be avoided, I think, if humanists approach the tagging of works as a properly critical practice, developing along with the works a glossary of keywords and literary concepts. Doing this means that those vetting the works should be expected to ground their judgments in professional standards. The burden of understanding is placed on those who presume to evaluate the work, and that understanding needs to circulate with the work proper.
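The practice described here – tags accepted only when grounded in a shared glossary, then aggregated across works into the “tag clouds” discussed below – can be sketched in a few lines of code. This is a hypothetical illustration, not any system actually used by ebr or the Electronic Literature Directory; the glossary terms, function names, and work titles are all stand-ins.

```python
from collections import Counter

# Hypothetical stand-in for an evolving glossary of e-lit keywords;
# a real glossary would change as the works and the community do.
GLOSSARY = {"hypertext", "kinetic", "generative", "narrative", "codework"}

def tag_work(title, proposed_tags):
    """Accept only tags drawn from the shared glossary, so every keyword
    attached to a work is grounded in a common critical vocabulary.
    Tags outside the glossary are kept aside as candidates for review."""
    accepted = sorted(t for t in proposed_tags if t in GLOSSARY)
    unvetted = sorted(set(proposed_tags) - GLOSSARY)
    return {"title": title, "tags": accepted, "unvetted": unvetted}

def tag_cloud(tagged_works):
    """Aggregate accepted tags across works into frequency counts --
    the raw material of a tag cloud."""
    return Counter(t for work in tagged_works for t in work["tags"])

works = [
    tag_work("Work A", ["hypertext", "narrative", "ironic"]),
    tag_work("Work B", ["generative", "narrative"]),
]
cloud = tag_cloud(works)
```

The point of the sketch is the division of labor: the glossary check places the burden of definition on those doing the tagging, while the aggregate counts let the community see which terms are actually doing critical work.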
An evolving glossary of electronic literary terms (to echo M. H. Abrams’s long-established Glossary of Literary Terms in print) has to be applied to works consistently and with an awareness of tag clouds forming throughout the Internet (at www.rhizome.org, for example, and at the newly released Electronic Literature Directory, version 2.0). Moreover, the terms will have to change as the kinds of work produced in electronic environments change, and these changes can be tracked. What scholars can then construct is not so much a universal set of categories defining “electronic literature,” “net literature,” or “digital or online literature,” but rather a practice capable of producing a poetics, as e-lit author and critical theorist Sandy Baldwin has recently argued. The novelist Robert Coover made a similar argument when he reported his experience “On Reading 300 Novels” for the 1984 PEN/Faulkner Awards – before the Internet existed, and roughly a decade before Coover himself would move decisively into electronic environments (without ever ceasing to write books). While conducting my own summer e-lit reading, I’ve kept Coover’s essay in mind, especially the criteria he used to distinguish settled and emerging genres.
Coover found that a majority of submissions were “serious” or “priestly” works praised by critics “for their vision, style, commitment, sensibility, honesty, their ‘intense realism.’” These are high-minded works that seek to “transcend mere storytelling, to reach past entertainment for its own sake into speculative and morally perplexing realms….” (38) Many such works are the product of Creative Writing programs, and they can seem “somewhat homogenized” despite the conscious search for a personal “voice.” A few best sellers, mostly formulaic exercises in established genres, also made it to Coover’s desk (and overflowed to his floor).
Ultimately, Coover supposes, “both of these voices are conservative. Both tend, through formal acquiescence, to extend the reign of the cultural establishment, even while challenging it now and then on the surface” (38). But occasionally, “rarely,”
a third voice arises, radically at odds with the priestly and folk traditions alike, though often finding its materials in the latter and sharing with it a basic distrust of the establishment view of things. This voice typically rejects mere modifications in the evolving group mythos, further surface variations on sanctioned themes, and attacks instead the supporting structures themselves, the homologous forms. Whereupon something new enters the world – at least the world of literature, if not always the community beyond. (38)
In a later, more famous essay, “The End of Books,” Coover would carry his investigations beyond “the supporting structures” of genre, psychology, and cultural anthropology to the medial infrastructure itself. In electronic archives, both structures, the generic and the medial, are capable of being transformed concurrently – and those transformations (as much as the work itself) are what need to be tagged and traced.
I don’t expect to find, in my sample of 300 e-lit works, anything resembling The Public Burning or even many works in the mode of Coover’s proto-hypertext, “The Babysitter.” Only rarely have I applied the keywords “Postmodern,” “Experimental,” “Fiction,” “Poem,” and even “Narrative” to electronic literature post-Web 1.0. A bifurcated story by Milorad Pavic on www.wordcircuits.com; a Pynchonesque presentation of the first Iraq War by Stuart Moulthrop (www.eastgate.com); the rude road trips, real and imagined, by Rob Wittig and the collective of “Unknown” writers (robwit.net/robwit/archives; unknownhypertext.com); Mark Amerika’s phon:e:me (phoneme.walkerart.org): these e-lit classics continued the deconstruction that the earlier generation of postmodern authors began. Or rather, one might now say, the first generation of e-lit works completed that deconstructive task, building the digital “Bookend” my conference colleagues and I discussed in Albany ten years ago.
What is being produced at this moment in a Web 2.0 environment cannot be categorized or neatly summarized, though it’s safe to say that the “narrative” and “semantic language understanding,” where found, will co-exist with codes and other non-verbal contexts, much as “the human” nowadays is bounded everywhere by non-human forces and objects. These may have been present always, but their mechanisms are coming to consciousness more readily, and more often. What I’m reading, for the most part, doesn’t often differentiate between “critical” and “creative” writing; the most prolific e-lit authors are also programmers and designers who seem to be as comfortable conversing with scientists and technologists as with other writers. Chances are, the history of the current era won’t appear (like Coover’s articles) in The New York Times or in traditional peer reviewed academic press books. That should not be a cause for gloating, among those who advocated early for a literary migration to new media, or cause for complacency among those who expect the printed book to outlast its new media upstarts. It is rather a call for constructing models of reception and commentary within the medium of our practice, and without too much worry about current disciplinary arrangements.
- Baldwin, Sandy. “Against Digital Poetics (Title in Process)” www.electronicbookreview.com/thread/electropoetics/against (under p2p review, July 2008).
- Coover, Robert. “On Reading 300 American Novels.” The New York Times Book Review (March 18, 1984): 1, 37-8.
- —. “The End of Books.” The New York Times Book Review (June 21, 1992).
- Cramer, Florian. “Critique of the ‘Semantic Web.’” Nettime. Tue, 18 Dec 2007.
[A rough transcript of a lecture manuscript written for the "Quaero Forum" on the politics and culture of search engines at Jan van Eyck Academy, Maastricht, September 2007.]
- Liu, Alan. The Laws of Cool: Knowledge Work and the Culture of Information. Chicago: University of Chicago Press, 2004.
- Tabbi, Joseph. “Locating the Literary in New Media.” Contemporary Literature (Summer 2008). Online at http://www.electronicbookreview.com/thread/criticalecologies/interpretive.
- —. “The Processual Page.” In The Future of the Page, ed. Peter Stoicheff and Andrew Taylor. Toronto: University of Toronto Press, 2004. Online at New Media and Culture: http://www.ibiblio.org/nmediac/fall2003/processual.html
- —. “Setting a Direction for the Electronic Literature Organization’s Directory: Toward a Semantic Literary Web.” http://eliterature.org/pad/slw.html