An excerpt from

My Mother Was a Computer

Digital Subjects and Literary Texts

N. Katherine Hayles

From Print to Electronic Texts

In "The Don Quixote of Pierre Menard," Borges uses his technique of reviewing nonexistent books to explain Pierre Menard's fantastic project of re-creating Don Quixote in the twentieth century. Although Menard's creation reproduces Cervantes' masterpiece word for word, Borges explains that it is an utterly different work, for the changed cultural context makes thoughts that were banal for Cervantes virtually unthinkable for a twentieth-century intellectual. Borges's mock-serious fantasy recalls more mundane operations carried out every day around the globe. Suppose Don Quixote is transported not into a new time but a new medium, and that the word sequences on the computer screen are identical to Cervantes' original print edition. Is this electronic version the same work? Subversive as Borges's fiction, the question threatens to expose major fault lines running through our contemporary ideas of textuality.

To explore these complexities, I propose to regard the transformation of a print document into an electronic text as a form of translation—"media translation"—which is inevitably also an act of interpretation. In invoking the trope of translation, I follow the lead of Dene Grigar. As she observes, the adage that something is gained as well as lost in translation applies with special force to print documents that are imported to the Web. The challenge is to specify, rigorously and precisely, what these gains and losses entail and especially what they reveal about presuppositions underlying reading and writing. My claim is that they show that our notions of textuality are shot through with assumptions specific to print, although they have not been generally recognized as such. The advent of electronic textuality presents us with an unparalleled opportunity to reformulate fundamental ideas about texts and, in the process, to see print as well as electronic texts with fresh eyes. For theory, this is the "something gained" that media translation can offer. It is a gift we cannot afford to refuse.

The issues can be illustrated by the William Blake Archive, a magnificent Web site designed by three of our most distinguished Blake scholars and editors. It is no exaggeration to say that the William Blake Archive establishes the gold standard for literary Web sites. The site is informed throughout by an enlightened editorial policy, for the editors state that they take the "work" to be the book considered as a unique physical object. They thus declare implicitly their allegiance to an idea that Jerome McGann, among others, has been championing: the physical characteristics of a text—page size, font, gutters, leading, and so on—are "bibliographic codes," signifying components that should be considered along with linguistic codes. The editors make canny use of the computer's simulation powers to render the screen display as much like the printed book as possible. They provide a calibration applet that lets users set screen resolution so the original page dimensions can be reproduced. They include a graphical help section that uses illustrations of pages to indicate the site's functionalities and capabilities. Clearly an enormous amount of thought, time, and money has gone into the construction of this site.

The editors of the archive are meticulous in insisting that even small differences in materiality potentially affect meaning, so they have gone to a great deal of trouble to compile not only different works but extant copies of the same work. Yet these copies are visually rendered on screen with a technology that differs far more in its materiality from print than the print copies do from one another. The computer accurately simulates print documents precisely because it is completely unlike print in its architecture and functioning. The simulation of visual accuracy, which joins facsimile and other editions in rescuing Blake from text-only versions that suppress the crucial visual dimensions of his work, is nevertheless achieved at the cost of cybernetic difference. Consider, for example, the navigation functionality that allows the user to juxtapose many images on screen to compare different copies and versions of a work. To achieve a comparable (though not identical) effect with print—if it could be done at all—would require access to rare books rooms, a great deal of page turning, and constant shifting of physical artifacts. A moment's thought suffices to show that changing the navigational apparatus of a work changes the work. Translating the words on a scroll into a codex book, for example, radically alters how a reader encounters the work; by changing how the work means, such a move alters what it means. One of the insights electronic textuality makes inescapably clear is that navigational functionalities are not merely ways to access the work but part of a work's signifying structure. An encyclopedia signifies differently than does a realistic novel in part because its navigational functionalities anticipate and structure different reading patterns (a clash of conventions that Milorad Pavic has great fun exploiting in Dictionary of the Khazars: A Lexicon Novel).

In terms of the William Blake Archive, we might reasonably ask: if slight color variations affect meaning, how much more does the reader's navigation of the complex functionalities of this site affect what the texts signify? Of course, the editors recognize that what they are doing is simulating, not reproducing, print texts. One can imagine the countless editorial meetings they must have attended to create the site's sophisticated design and functionalities; surely they know better than anyone the extensive differences between the print and electronic Blake. Nevertheless, they make the rhetorical choice to downplay these differences. For example, there is a section explaining that dynamic data arrays are used to generate the screen displays, but there is little or no theoretical exploration of what it means to read an electronic text produced in this fashion rather than the print original. Great attention is paid to the relation of meaning to linguistic and bibliographic codes and almost none to the relation of meaning to digital codes. Matthew Kirschenbaum's call for a thorough rethinking of the "materiality of first generation objects" in electronic media is very much to the point. Calling for a closer relationship between electronic textuality (focusing on digital work) and textual studies (traditionally focused on print), he lays out a framework for discussing electronic texts in bibliographic terms, including the nomenclature "layer, version, and release"; "object"; "state"; "instance"; and "copy." As his argument makes clear, electronic texts often have complex bibliographic histories that materially affect meaning, to say nothing of differences between print and electronic instantiations of a work. Concentrating only on how the material differences of print texts affect meaning, as does the William Blake Archive, is like feeling slight texture differences on an elephant's tail while ignoring the ways in which the tail differs from the rest of the elephant.

What Is a Text?

Tackling the whole elephant requires rethinking the nature of textuality, starting with a basic question: what is a text? In "Forming the Text, Performing the Work," Anna Gunder, in an effort to clarify the relations between electronic and print media, has undertaken a meticulous survey of textual criticism to determine how editors employ the foundational terminology of "work," "text," and "document" in the context of print bibliographic studies. A work is an "abstract artistic entity," the ideal construction toward which textual editors move by collating different editions and copies to arrive at their best guess for what the artistic creation should be. It is important to note that the work is ideal not in a Platonic sense, however, for it is understood to be the result of editorial assumptions that are subject to negotiation, challenge, community norms, and cultural presuppositions. (Jerome McGann's attacks on the principle of defining the work through an author's "final intentions" are a case in point.) Next down the scale comes the text. Gunder points out that the "work as such can never be accessed but through some kind of text, that is, through the specific sign system designated to manifest a particular work." Texts, then, are abstract entities from which editors strive to excavate the work. In this respect, she notes, texts of poems are unlike paintings. Whereas no one would claim it makes sense to talk about a painting separate from the substrate in which it is embodied, editors presume that it does make sense to talk about a text as something separate from its physical embodiment in an artifact. Only when we arrive at the lowest level of the textual hierarchy, the document, is the physical artifact seen as merging with the sign system as an abstract representation.

Gunder's analysis is consistent with the terminological practices of Peter Shillingsburg, one of the editors she surveys. In Scholarly Editing in the Computer Age, Shillingsburg defines a text as "the actual order of words and punctuation as contained in any one physical form, such as a manuscript, proof or book." To forestall misunderstanding, he clarifies that "a text (the order of words and punctuation) has no substantial or material existence, since it is not restricted by time and space.…The text is contained and stabilized by the physical form but is not the physical form itself." Driving the nail farther into this terminological coffin, he insists "it is possible for the same text to be stored in a set of alphabetic signs, a set of Braille signs, a set of electronic signals on a computer tape, and a set of magnetic impulses on a tape recorder. Therefore, it is not accurate to say that the text and the signs or storage medium are the same. If the text is stored accurately on a second storage medium, the text remains the same though the signs for it are different. Each accurate copy contains the same text; inaccurate or otherwise variant copies contain new texts" (emphasis added). Some hundred pages later, he admits that "proponents of the bibliographic orientation have demonstrated beyond argument, I believe, that the appearance of books signifies a range of important meanings to their users"; but apparently he does not think this imbrication of physical form with meaning requires a different notion of textuality. To be fair to Shillingsburg, he has since defined "text" as a compound of matter, concept, and action. Nevertheless, there are no doubt many editors and literary scholars—I dare say the majority—who assume much the same definitions of "work," "text," and "document" that he formulates. Moreover, Shillingsburg's more nuanced explanations of "text" and "work" in his recent analysis result in an alarming proliferation of terms, so that "work," "text," and "version" all split into multiple subcategories. This scheme is reminiscent of the Ptolemaic model of the universe, which piled epicycles upon cycles in an effort to keep the earth at the center. The problem with the Ptolemaic universe was not that it could not account for celestial motion; rather, it was the cost of increasing complexity required for its earth-centric view. Perhaps it is time for a Copernican revolution in our thinking about textuality, a revolution achieved by going back and rethinking fundamental assumptions.

We can begin this reassessment by noticing how Shillingsburg's definitions are perfectly crafted to trivialize differences between print and electronic media and to insulate "text" and even more so "work" from being significantly affected by the specificities of media. To return to his examples, he claims that a Braille version of a novel is the same text as a print version, yet the sensory input of the two forms is entirely different. Moreover, it is clear that one medium—print—provides the baseline for the definitions, even though they are postulated as including other media as well. Thinking of the text as "the order of words and punctuation" is as print-centric a definition as I can imagine, for it comes straight out of the printer's shop and the lineation of type as the means of production for the book. We can see how Shillingsburg imports this print-centric notion into electronic media when he refers to "computer tape" in the quotation above, for this construction unconsciously carries over the notion that the text resides at one physical location, even though it is at the same time alleged to be "not restricted by time and space." When a text is generated in an electronic environment, the data files may reside on a server hundreds of miles distant from the user's local computer. Moreover, in cases where text is dynamically assembled on the fly, the text as "the actual order of words and punctuation" does not exist as such in these data files. Indeed, it does not exist as an artifact at all. Rather, it comes into existence as a process that includes the data files, the programs that call these files, and the hardware on which the programs run, as well as the optical fibers, connections, switching algorithms, and other devices necessary to route the text from one networked computer to another.
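
To make this processual account concrete, consider a minimal sketch in Python (entirely hypothetical: the fragment names and assembly rules are invented for illustration). The words the reader sees exist nowhere in storage as "the actual order of words and punctuation"; they are produced anew, at request time, by a program operating on stored data:

```python
# A toy illustration, not any actual system: the displayed text does not
# exist as an artifact in storage; it is generated, each time, from data
# files plus the program that assembles them.

# Stand-ins for data files that might reside on a distant server.
fragments = {
    "title": "don quixote",
    "chapter_1": "in a village of la mancha, the name of which ...",
}

def assemble_page(keys, uppercase_titles=True):
    """Produce the text as a process: select, transform, and join stored
    fragments according to rules held in the program, not in the data."""
    parts = []
    for key in keys:
        fragment = fragments[key]
        if uppercase_titles and key == "title":
            fragment = fragment.upper()  # this capitalization exists only here
        parts.append(fragment)
    return "\n".join(parts)

# The words the reader sees are the product of data + program + execution;
# "the actual order of words" is nowhere stored as such.
print(assemble_page(["title", "chapter_1"]))
```

Delete or alter the program, and the "same" stored data yields a different text, or no text at all.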

An even more serious objection to Shillingsburg's definition is its implicit assumption that "text" does not include such qualities as color, font size and shape, and page placement, not to mention such electronic-specific effects as animation, mouseovers, instantaneous linking, and so on. In most contemporary electronic literature, screen design, graphics, multiple layers, color, and animation, among other signifying components, are essential to the work's effects. Focusing only on "the actual order of words and punctuation" would be as inadequate as insisting that painting consists only of shapes, ruling out of bounds such things as color, texture, composition, and perspective. The largely unexamined assumption here is that ideas about textuality forged in a print environment can be carried over wholesale to the screen without rethinking how things change with electronic text, as if "text" were an inert, nonreactive substance that can be poured from container to container without affecting its essential nature.

Moreover, the comparison with electronic text reveals by implication how limited this definition of "text" is even for print media. Although Shillingsburg gives a nod to those of the "bibliographic orientation," he does not begin to deal in a serious way with Jerome McGann's brilliant readings of poets ranging from Lord Byron to Wallace Stevens and with his repeated demonstrations that bibliographic effects are crucial in setting up meaning play within the texts. To exclude these effects from the meaning of "text" is to impoverish criticism by cutting it off from resources used to create artistic works. How can one find these effects in a text if "text" has been defined so as to exclude them? Although Shillingsburg's definition of "work" may not be Platonic in an ideal sense, there is nevertheless a Platonic yearning on display in his definitions, for he seeks to protect the "work" from the noisiness of an embodied world—but this very noise may be the froth from which artistic effects emerge.

The desire to suppress unruliness and multiplicity in order to converge on an ideal "work" is deeply embedded in textual criticism. However the criteria facilitating this convergence are defined, textual editors have largely agreed that convergence is the ideal. Hans Zeller, arguing in 1975 for a shift of the editorial perspective from the author's "final intentions" to a broader historical viewpoint, observes that "the editor searches in the transmitted text for the one authentic text, in comparison with which all else will be a textual corruption." Not arriving at a single authoritative text, editors argue, risks plopping the reader into a rat's nest of complexly interrelated variants, thus foisting onto her the Sisyphean labor of sorting through the mess and arriving at a sensible text that most readers would prefer to have handed to them. In this view, readers want a text they can take more or less at face value so that they can get on with the work of interpreting its meaning and explicating its artistic strategies. Here the comparison of editing with translation is especially apt, for the editor, like the translator, makes innumerable decisions that can never be fully covered by an explicit statement of principles. As McGann points out, these decisions inevitably function as interpretations, for they literally construct the text in ways that foreground some interpretive possibilities and suppress others.

When texts are translated into electronic environments, the attempt to define a work as an immaterial verbal construct, already problematic for print, opens a Pandora's box of additional complexities and contradictions, which can be illustrated by debates within the community formulating the Text Encoding Initiative (TEI). The idea of TEI was to arrive at principles for coding print documents into electronic form that would preserve their essential features and, moreover, allow them to appear more or less the same in complex networked environments, regardless of platform, browser, and so on. To this end, the community (or rather, an influential contingent) arrived at the well-known principle of OHCO, the idea that a text can be encoded as an ordered hierarchy of content objects. As Allen Renear points out in his seminal analysis of this process, the importation of print into digital media requires implicit decisions about what a text is. Expanding on this point, Mats Dahlström, following C. Michael Sperberg-McQueen, observes that the markup of a text is "a theory of this text, and a general markup language is a general theory or conception of text."

With respect to the general theory of OHCO, Renear identifies three distinct positions within the text encoding community, which correspond roughly to three historical stages. The first stage held that a text consists of a hierarchical set of content objects such as chapters, sections, subsections, paragraphs, and sentences. This view asserted that the hierarchy is essential to the production of the text and so must occupy center stage in transforming print text into digital code. This belief in hierarchy informed how the community used SGML (Standard Generalized Markup Language) to create protocols and standards that would ensure that the content objects were reproduced in digital media, and moreover reproduced in the same hierarchy as print. Although most of these researchers thought of themselves as practitioners rather than theorists, their decisions, as Renear points out, constituted a de facto theory of textuality that was reinforced by their tacit assumption that the "Platonic reality" of a text really is its existence as an ordered hierarchy of content objects.
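
The first-stage position can be paraphrased in a short Python sketch (a schematic paraphrase of OHCO, not actual TEI or SGML tooling): the text is modeled as a tree of nested content objects, and markup is simply the serialization of that tree:

```python
# Schematic paraphrase of the first-stage (OHCO) position, not actual TEI
# code: a text is an ordered hierarchy of content objects, so it can be
# represented as a tree and serialized as nested markup.

def element(name, *children):
    """A content object: a named node with ordered children
    (sub-elements or plain strings)."""
    return {"name": name, "children": list(children)}

def serialize(node, indent=0):
    """Render the hierarchy as SGML/XML-style nested tags."""
    if isinstance(node, str):
        return " " * indent + node
    pad = " " * indent
    inner = "\n".join(serialize(child, indent + 2)
                      for child in node["children"])
    return f"{pad}<{node['name']}>\n{inner}\n{pad}</{node['name']}>"

chapter = element(
    "chapter",
    element("section",
            element("paragraph", "First sentence.", "Second sentence.")),
)
print(serialize(chapter))
```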

The next stage, which Renear identifies as pluralism, was propelled by the realization that many texts consist of not just one hierarchy but several interpenetrating hierarchies; the standard example is a verse drama, which can be parsed as sentences and metrical lines. Epistemologically, this realization led to a view of texts as systems of ordered hierarchies, and refinements such as Document Type Definitions (DTDs) were designed to introduce more flexibility into the system. The third stage, which Renear calls antirealism, draws the conclusion that the text does not preexist encoding as a stable ontological object but is brought into existence through implicit assumptions actualized through encoding procedures. Renear quotes Alois Pichler as exemplifying this approach: "Our aim in transcription is not to represent as accurately as possible the originals, but rather to prepare from the original another text so as to serve as accurately as possible certain interests in the text." Renear, who identifies himself as a pluralist, astutely points out the tautologies and ambiguities in the antirealist position—for example, indeterminacies in identifying which "certain interests in the text" are to be served.
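
A toy sketch can make the verse-drama example concrete (the verse and its segmentation are invented for illustration): the same sequence of words divides into metrical lines at one set of boundaries and into sentences at another, and because the boundaries cross, no single ordered hierarchy contains both parsings:

```python
# Toy illustration of interpenetrating hierarchies (segmentation invented):
# the same word sequence parses into metrical lines and into sentences
# whose boundaries cross, so no single tree can contain both divisions.

words = ["Now", "is", "the", "winter", "of", "our", "discontent",
         "made", "glorious", "summer.", "New", "joys", "begin",
         "their", "reign."]

metrical_lines = [words[0:7], words[7:15]]    # line break after "discontent"
sentences      = [words[0:10], words[10:15]]  # sentence break after "summer."

for unit in metrical_lines:
    print("line:    ", " ".join(unit))
for unit in sentences:
    print("sentence:", " ".join(unit))

# The first sentence ends inside the second metrical line: the two
# hierarchies overlap without either nesting cleanly inside the other.
```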

My interest in this controversy points in a different direction, for what strikes me is the extent to which all three positions—Platonist, pluralist, and antirealist—focus almost exclusively on linguistic codes, a focus that allows them to leave the document as a physical artifact out of consideration. I can illustrate the implications of this erasure by returning to the William Blake Archive. The editors of the archive, as we have seen, take into account the book as a physical object. Their encoding practices make clear, however, that they implicitly understand the bibliographic almost exclusively in terms of the visual. Other aspects of the text as physical object, such as the lovely feeling of a leather binding or the musty smell of old paper, are not reproduced in digital codes. To undertake the complete bibliographic coding of a book into digital media would be to imagine the digital equivalent of Borges's Library of Babel, for it would have to include an unimaginable number of codes accounting for the staggering multiplicity of ways in which we process books as sensory phenomena. To reduce this impossible endeavor to manageable proportions, editors must identify some features of particular interest, and it makes excellent sense to emphasize the visual aspect of Blake's works. But we lose important insights if we naturalize this process and allow ourselves the illusion that Blake's books—or any books, for that matter—have been faithfully reproduced within digital media. Rather, choices have been made about which aspects of the book to encode, and these choices are heavily weighted toward the linguistic rather than the bibliographic. Moreover, the choices have further implications in the correlations they establish between linguistic, bibliographic, and digital codes. Thus in his rigorous analysis of how markup languages such as SGML relate to the Hjelmslevian distinction between content and expression (the physical instantiation of a text), Dino Buzzetti shows that these languages do not solve the problems raised by thinking of the text as an abstract entity; rather, they amplify implicit problems and further complicate the situation. Only if we attend to the interrelations of linguistic, bibliographic, and digital codes can we grasp the full implications of the transformations books undergo when they are translated into a digital medium.

The debates about encoding assume implicitly that there is some textual essence that can be transported from print to digital media. Even the antirealist position assumes an essence, although now it is an essence created by an editor. All three positions elide from electronic texts the materiality of books and their physical differences. A more accurate perception would focus on the editorial process of choice, which is always contextual and driven by "certain interests," although these reside not exclusively in the text but in the conjunction of text, editorial process, and cultural context. In my view, the ontology card is not worth playing. There is no Platonic reality of texts. There are only physical objects such as books and computers, foci of attention, and codes that entrain attention and organize material operations. Since no print books can be completely encoded into digital media, we should think about correspondences rather than ontologies, entraining processes rather than isolated objects, and codes moving in coordinated fashion across representational media rather than mapping one object onto another.

The issue goes to the heart of what we think a text is, and at the heart of the heart is the belief that "work" and "text" are immaterial constructions independent of the substrates in which they are instantiated. We urgently need to rethink this assumption, for as long as it remains intact, efforts to account for the specificities of print and electronic media will be hamstrung. Without nuanced analyses of the differences and similarities of print and electronic media, we will fail to grasp the fuller significance of the momentous changes underway as the Age of Print draws to a close and print—as robust, versatile, and complex as ever—takes its place in the dynamic media ecology of the twenty-first century. For an appreciation of these changes we will require a more workable sense of materiality than has traditionally accompanied theories of textuality, which invoke it only to dismiss it as something to be left behind through the labor of creating the ideal work.

Physicality, Materiality, and Embodied Textuality

There are, of course, good reasons why editors have sought to separate the idea of the work from its physical instantiation. If the "work" is instantiated in its physical form, then every edition would produce, by definition, another "work," and textual form would never be stable. Whether textual form should be stabilized is a question at the center of Jerome McGann's "experiments in failure," which he discusses in Radiant Textuality. As both Mats Dahlström and McGann point out, the two imperatives guiding most textual criticism are, if not contradictory, at least in tension with one another: editors want to converge on the ideal work and at the same time provide readers as much information as possible about textual variants. The Web promises to allow these dual imperatives to be more successfully integrated than ever before, as the William Blake Archive and McGann's work on the D. G. Rossetti Hypermedia Archive demonstrate. At the same time, perhaps ironically, the Web's remarkable flexibility and radically different instantiation of textuality also draw into question whether it is possible or desirable to converge on an ideal "work" at all. Educated by his work with the D. G. Rossetti Hypermedia Archive, McGann argues against convergence as a critical and theoretical principle, attempting to show through cogent readings of poetic works and other strategies that a text is never identical with itself.

Instead he argues for the practice of what he calls "deformation," a mode of reading that seeks to liberate from the text the strategies by which it goes in search of meaning. Following the ideas of Galvano della Volpe, an Italian critic writing in the 1960s, McGann argues that meaning is not the goal of critical explication but a residue left over after critical interrogation is finished. Meaning itself cannot be the goal of critical explication, for "this would run the risk of suggesting that interpretation can be adequate to poiesis. It cannot." Indeed, explication cannot be adequate even to its own understanding of itself, which can be accomplished only through an explication of the explication, which in turn requires another explication to try to get at the residue left over when these two explications are compared, and so on to infinity or to the exhaustion of the critical will. Underlying this argument is an implicit analogy. Just as textual criticism has traditionally tried to converge on an ideal work, so hermeneutical criticism has tried to converge on an ideal meaning. Echoing deconstructive theory more than he acknowledges, McGann asks what would happen if both kinds of enterprise were to abandon the movement toward convergence and were to try instead to liberate the multiplicities of texts through a series of deformations. Thus he is more interested (at least theoretically) in what deformations of Rossetti's images in Photoshop reveal about their composition than in the accomplishments of the William Blake Archive in simulating the color tones and sizes of the paper documents.

This kind of argument opens the way for a disciplined inquiry into the differences in materiality between print and electronic textuality. As editor of the D. G. Rossetti Hypermedia Archive, McGann has had ample—one might almost say, painful—opportunity to appreciate the differences between the print and electronic text. Indeed, it is precisely this gap that leads him to find John Unsworth's essay "The Importance of Failure" so important. McGann's project is to convert the failure to make electronic textuality perform as an exact duplicate of print into a strength by using "deformation" as a tool for critical insight. He emphasizes the importance of doing and making, suggesting that practical experience in electronic textuality is a crucial prerequisite for theorizing about it. In this sense, his work represents an important advance over the rhetoric of the William Blake Archive (though not necessarily over its technical accomplishments), for he sees that electronic textuality can be used as something other than a simulacrum of print. Rather, he understands that it can provide a standpoint from which to rethink the resources of the print medium.

The impact of his experience is readily apparent in his redescriptions of print texts in terms that make them appear fully comparable to electronic texts. He argues, for example, that all texts are marked; he regards paragraph indentations and punctuation as forms of marking equivalent to HTML, the Hypertext Markup Language used to format documents for electronic environments. Moreover, he proposes that all texts are algorithmic, containing within themselves instructions to generate themselves as displays (the display form of the document here being considered distinct from the data and algorithms used to create it). So extensive and detailed are his redescriptions that one wonders if electronic text has any distinctive features of its own. The burden of his argument would suggest that it does not, an implication strengthened by his overly casual dismissal of the cases made by Janet Murray and Espen Aarseth for the specificities of electronic textuality.
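
McGann's analogy can be dramatized in a few lines of Python (a toy converter of my own devising, not anything McGann supplies): if a paragraph indent in a print-style text really functions as markup, a mechanical rule can translate it into the HTML tag that performs the same signifying work on screen:

```python
# A toy converter dramatizing McGann's analogy, not code he provides: a
# four-space paragraph indent in plain text is already markup, and a
# mechanical rule can translate it into the equivalent HTML tag.

plain = """    It was the best of times, it was the worst of times.
The same paragraph continues on this line.
    A new paragraph announces itself with an indent."""

def indented_to_html(text):
    """Treat a leading four-space indent as a paragraph boundary."""
    paragraphs, current = [], []
    for line in text.splitlines():
        if line.startswith("    ") and current:
            paragraphs.append(" ".join(current))  # close the open paragraph
            current = []
        current.append(line.strip())
    paragraphs.append(" ".join(current))
    return "\n".join(f"<p>{p}</p>" for p in paragraphs)

print(indented_to_html(plain))
```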

When push comes to pixel, it is clear that McGann's primary allegiance is to print rather than electronic textuality. He repeatedly asserts that the resources of the electronic medium pale in comparison to print. Speaking specifically of fiction, he argues in Radiant Textuality that "there is no comparison…between the complexity and richness of paper-based fictional works, on the one hand, and their digital counterparts—hypermedia fiction—on the other." Although he is too astute a critic to make comparisons directly, by juxtaposing in the next sentence Stuart Moulthrop with Italo Calvino, McGann implies that Moulthrop, a contemporary pioneer in electronic hypertext, is not as good a writer as Calvino, or at any rate does not produce literature of the same quality. Like many arguments McGann mounts to prove the superiority of print, the implied comparison here between print and electronic literature is seriously flawed. It is obviously inappropriate to compare a literary medium that has been in existence for fifteen years with print forms that have developed over half a millennium. A fairer comparison would be print literature produced from 1550 to 1565, when the conventions of print literature were still in their nascent stages, with the electronic literature produced from 1985 to 2000. I believe that anyone familiar with both canons would be forced to agree it is by no means obvious that the print canon demonstrates conclusively the superiority of print as a medium for literary creation and expression. Given five hundred years in which to develop—if we can possibly stretch our imaginations this far—electronic literature may indeed prove itself equal or superior to print.

If, as Shakespeare's Dogberry observes, comparisons are odorous (i.e., odious), this one is especially so. As McGann acknowledges, it should not be a question of pitting one medium against the other but of understanding the specificities of each. By using electronic textuality to better understand print, McGann opens the way for important insights into its possibilities. Unfortunately, he is not as successful in using print to understand the specificities of electronic textuality. When problems crop up in his arguments, they almost always stem from this source. He asserts, for example, that print text differs from itself, and he uses close readings to argue the point. But his argument confuses what happens in the mind of the reader with the stability of print in a given document. To demonstrate that print is unstable even at the level of a document, he scans a document with an optical character reader and reports that the machine gives different readings on different scans. However, this experiment does not demonstrate that print is not self-identical, but only that the translation between print and electronic text is unstable.

In other arguments, he conflates the instability of a text—for example, variations in different copies of an edition or between different editions—with the instability of a print document, again to argue that print, like electronic text, is fluid and unstable. The stubborn fact remains, however, that once ink is impressed on paper, it remains relatively stable and immovable. The few exceptions that might be invoked—for example, an artist's book created with thermochromic ink that changes color when heated by a hand touch, or print impressed on cutouts that move—should not be allowed to obscure the general observation that the print of a given document is stable for (more or less) long periods of time, in dramatic contrast to the constant refreshing of a computer screen many times each second. Moreover, print does not normally move once impressed onto the paper fiber, again in contrast to the animations, rollovers, and other such features that increasingly characterize electronic literature. No print document can be reprogrammed once the ink has been impressed onto the paper, whereas electronic texts routinely can. These differences do not mean, of course, that print is inferior to electronic text, only that it is different. Admitting these differences does not diminish the complexity and flexibility of print books, which have resources different from those of electronic texts; but it does pave the way for understanding the specificities of electronic textuality and, thereby, coming to a fuller appreciation of its resources.

What, then, are these differences, and what are their implications for theories of textuality? Mats Dahlström tackles this question in his exploration of how notions of a scholarly edition might change with electronic textuality. He makes the important point, also noted by Anna Gunder in "Forming the Text, Performing the Work," that with electronic texts there is a conceptual distinction—and often an actualized one—between storage and delivery vehicles, whereas with print the storage and delivery vehicles are one and the same. With electronic texts, the data files may be on one server and the machine creating the display may be in another location entirely, which means that electronic text exists as a distributed phenomenon. This dispersion introduces many possible sources of variation into the production of electronic text that do not exist in the same way with print, for example, when a user's browser displays a text in colors different from those the writer saw on her machine when she was creating it. More fundamental is the fact that the text exists in dispersed fashion even when it is confined to a single machine. There are data files, programs that call and process the files, hardware functionalities that interpret or compile the programs, and so on. It takes all of these together to produce the electronic text. Omit any one of them, and the text literally cannot be produced. For this reason it would be more accurate to call an electronic text a process than an object. Certainly it cannot be identified with, say, a diskette or a CD-ROM, for these alone can never produce a text unless they are performed by the appropriate software running on the appropriate hardware.
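
A schematic sketch can suggest how this separation breeds variation (all names, values, and settings below are invented): the same stored data, passed through differently configured delivery mechanisms, yields visibly different texts:

```python
# Schematic sketch of the storage/delivery split (all names invented):
# one stored representation, many possible deliveries. The reader never
# sees the stored data directly, only a locally produced display.

stored = {"body": "songs of innocence", "color": "#aa3300"}  # the "server"

def deliver(data, user_prefs):
    """The 'browser': merge stored data with local settings to produce
    the display a particular reader actually sees."""
    color = user_prefs.get("override_color", data["color"])
    casing = str.upper if user_prefs.get("capitals") else str.title
    return f"[{color}] {casing(data['body'])}"

# The same storage yields different delivered texts on different machines.
print(deliver(stored, {}))  # one reader's display
print(deliver(stored, {"override_color": "#000000", "capitals": True}))
```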

Let me emphasize that this processing is necessarily prior to whatever cognitive processing the user performs to read and interpret the text. Although print readers perform sophisticated cognitive operations when they read a book, the printed lines exist as such before the book is opened, read, or understood. An electronic text does not have this kind of prior existence. It does not exist anywhere in the computer, or in the networked system, in the form it acquires when displayed on screen. After it is displayed, of course, readerly processing may occur, as with print. But we should not indulge in the logical confusion that results when it is assumed that the creation of the display—a process that happens only when the programs that create the text are activated—entails the same operations as a reader's cognitive processing. In this sense, electronic text is more processual than print; it is performative by its very nature—independent of whatever imaginations and processes the user brings to it, and regardless of variations between editions and copies.

Acknowledging these differences, Mats Dahlström argues that electronic text should be understood as consisting, at bottom, of binary code, the sequences of ones and zeros that underlie all the languages built on top of them. But defining electronic text in this way, a move reminiscent of Friedrich Kittler's argument in "There Is No Software," inexplicably privileges binary code over all the other things necessary to produce the text as a document a user can read. In insisting further that electronic text is above all a pattern, Dahlström risks reinscribing the dematerialization so prominently on display in Shillingsburg's definition of "text" as a sequence of words and punctuation. If the idea of print text as a dematerialized entity is already a fiction (however convenient), how much more fictional is the idea of an electronic text as binary code, when how that code is stored, processed, and displayed is utterly dependent on the nature of the hardware and software? Perhaps it is time to think the unthinkable—to posit a notion of "text" that is not dematerialized and that does depend on the substrate in which it is instantiated. Rather than stretch the fiction of dematerialization thinner and thinner, why not explore the possibilities of texts that thrive on the entwining of physicality with informational structure?

This is where I think McGann is trying to go with his argument that texts are never self-identical, an insight he is developing further in his present work on the quantum nature of textuality (i.e., textuality that is unresolvably ambiguous until a reader interacts with it in a specific way). As we have seen, if one accepts the physicality of the text, then the door opens to an array of infinite difference, with no text identical to any others because there are always differences between any two physical objects, however minute. Although McGann does not fully develop the point with regard to electronic textuality, his argument that a text is not physically self-identical (which he applies mostly to print) is mere common sense with electronic texts. Consider, for example, the time it takes images to appear on screen when they are being drawn from a remote server. Certainly the time lag is an important component of the electronic text, for it determines in what order the user will view the material. Indeed, as anyone who has grown impatient with long load times knows, in many instances it determines whether the user will see the image at all. These times are difficult to predict precisely because they depend on the individual computer's processing speed, traffic on the Web, efficiency of data distribution on the hard drive, and other imponderables. This aspect of electronic textuality—along with many others—cannot be separated from the delivery vehicles that produce it as a process with which the user can interact. Moreover, for networked texts, these vehicles are never the same twice, for they exist in momentary configurations as data packets are switched quickly from one node to another, depending on traffic at the instant of transfer. In this respect and many others, electronic texts are indeed not self-identical. As processes they exhibit sensitive dependence on temporal and spatial contexts, to say nothing of their absolute dependence on specific hardware and software configurations. Rita Raley points to this aspect of electronic textuality in her emphasis on performance. Seeking to locate the differences between print and electronic texts, she remarks, "The operative difference of hypertext can only be revealed in the performing and tracing of itself, in its own instantiation."

What are the consequences of the idea that textuality is instantiated rather than dematerialized, dispersed rather than unitary, processual rather than object-like, flickering rather than durably imprinted? The specter haunting textual criticism is the nightmare that one cannot then define a "text" at all, for every manifestation will qualify as a different text. Pervasive with electronic texts, the problem troubles notions of print texts as well, for as physical objects they also differ from one another. But this need not be a catastrophe if we refine and revise our notion of materiality.

Let us begin rethinking materiality by noting that it is impossible to specify precisely what a book—or any other text—is as a physical object, for there are an infinite number of ways its physical characteristics can be described. Speaking of an electronic text, for example, we could focus on the polymers used to make the plastic case or the palladium used in the power cord. The physical instantiation of a text will in this sense always be indeterminate. What matters for understanding literature, however, is how the text creates possibilities for meaning by mobilizing certain aspects of its physicality. These will necessarily be a small subset of all possible characteristics. For some texts, such as Edwin Schlossberg's artist's book Wordswordswords, the activated physical characteristics may include the paper on which the words are impressed. For other texts, the paper's contribution may be negligible.

The following definition provides a way to think about texts as embodied entities without falling into the chaos of infinite difference: The materiality of an embodied text is the interaction of its physical characteristics with its signifying strategies. Centered in the artifact, this notion of materiality extends beyond the individual object, for its physical characteristics are the result of the social, cultural, and technological processes that brought it into being. As D. F. McKenzie has argued in the context of the editorial theory of "social texts," social processes too are part of a text's materiality, which leads to the conclusion that it is impossible to draw a firm distinction between bibliographic and interpretive concerns. In Bibliography and the Sociology of Texts, his influential Panizzi lectures, McKenzie comments, "My own view is that no such border exists." Because materiality in this view is bound up with the text's content, it cannot be specified in advance, as if it existed independent of content. Rather, it is an emergent property. What constitutes the materiality of a given text will always be a matter of interpretation and critical debate; what some readers see as physical properties brought into play may not appear so to other readers. But this is not the end of the world as textual criticism has known it. Indeed, it is normal procedure for literary scholars to consider a "text" as something negotiated among a community of readers, infinitely interpretable and debatable. McKenzie's definition of "text" includes "verbal, visual, oral and numeric data, in the form of maps, prints, and music, of archives of recorded sound, of films, videos, and any computer-stored information." Moreover, he emphasizes that the recognized negotiations that occur with print works should be extended to electronic works.


Copyright notice: Excerpt from pages 89-104 of My Mother Was a Computer: Digital Subjects and Literary Texts by N. Katherine Hayles, published by the University of Chicago Press. ©2005 by the University of Chicago. All rights reserved. This text may be used and shared in accordance with the fair-use provisions of U.S. copyright law, and it may be archived and redistributed in electronic form, provided that this entire notice, including copyright information, is carried and provided that the University of Chicago Press is notified and no fee is charged for access. Archiving, redistribution, or republication of this text on other terms, in any medium, requires the consent of the University of Chicago Press. (Footnotes and other references included in the book may have been removed from this online version of the text.)

