
23.

Cybertextuality and Philology

Ian Lancashire

What is Cybertextuality?

Employed sometimes as a synonym for "digital text" and "reading machine," the term "cybertext" denotes any document viewed from the perspective of cybernetics, the theory or study of communication and control in living things or machines. This theory emerged in 1948 in a book titled Cybernetics by the American mathematician Norbert Wiener (1894–1964). Cybernetics derives from the Greek kubernetes, meaning "steersman." Wiener calls it "the theory of messages" (Wiener 1950: 106; Masani 1990: 251–2) because it theorizes communications among animals and machines. I refer to Wienerian messages in any cybernetic system as cybertexts (Lancashire 2004). They come in pairs, an utterance-message and its feedback-response, both of them transmitted through a channel from sender (speaker/author) to receiver (listener/reader) through an ether of interfering noise. Five control modules (sender, channel, message, noise, and receiver) affect forward-messaging (the utterance) and messaging-feedback (the model) through this cybernetic channel. Wiener's insight, and the raison d'être of cybernetics, is that anyone authoring a message steers or governs its making by feedback from those who receive it. Norbert Wiener defined the mechanics of cybernetics, and Claude Shannon, who founded information science, devised its equations.

Many sciences, having lionized the brainchild of Wiener and Shannon for a time, went on with their business as usual, and the digital humanities have been no exception. The postmodernist Jean-François Lyotard in The Postmodern Condition derived his "agonistic" model for communication and his "theory of games" (Galison 1994: 258) from cybernetics. Donna Haraway's cyborg manifesto also builds on cybernetics. Postmodernists tend to draw on cybernetics as a fund of metaphors, but messages, whether they are instructions relayed to a missile system by radar or poems, have mechanics and measurable feedback. A decade ago, Espen Aarseth coined the term "cybertext" in his book of the same name. All cybertexts, Aarseth says, are "ergodic," a word that Wiener uses and that etymologically means "path of energy or work": a cybertext requires a reader to do physical work aside from turning pages and focusing his eyes. By throwing stalks (if reading the I Ching) or managing a computer process with a mouse or a keyboard, the human operator-reader partners with an active and material medium and so becomes part of a figurative reading "machine for the production of variety of expression" (1997: 3). To Aarseth, the cybertext is that machine. It includes the reader, his request, and a computer that parses his request and sends it to a "simulation engine" that assembles digital objects for a "representation engine," which passes them along to a synthesizer or interface device, which returns feedback to the reader. There is no author or noise in Aarseth's diagram. It cuts Wiener's system in half.

Another computing humanist who cites Norbert Wiener is N. Katherine Hayles. In her How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics (1999), Hayles also drops Wiener's sender/author in order to focus on his receiver/observer, following the concept of autopoiesis, a self-organizing principle that she finds in the cybernetic research of the Chilean scientist Humberto Maturana. He observes that, in seeing things, a frog "does not so much register reality as construct it," a theme he generalizes in the maxim, "Everything said [in a cybernetic system] is said by an observer" (xxii, cited by Hayles 1999: 135). That is, living things do not receive messages passing through a noiseless channel in the state that they were sent. In constructing or modeling messages, readers change what authors utter: new messages result. Maturana, Hayles, and many others neglect the sender/author, although authors are also readers: authors receive and construct their own utterances.

Cybertextuality systematically applies cybernetics to speaking and writing as well as listening and reading. Consider how the five things studied by literary researchers parallel the five modules of cybernetics. There are authors of texts (studied in stylistics, biography, and authorship attribution; Lancashire 2005), readers of texts (studied in theory, criticism, and usability studies), texts themselves (studied in close reading and encoding), the language technologies and media that together form a channel governing transmission (studied in media studies), and the wear-and-tear of transmission that might be called literary noise (studied in stemmatics and textual criticism).

This five-part model supports two kinds of messages or cybertexts: an outward message, from author to reader, that is encoded with redundancy to resist the damaging effects of noise; and a feedback response that variously reflects the state in which the original message arrived. Authors bring to bear all the techniques of grammar and rhetoric to repeat and vary what they want to say so that the reader can understand it. Cybertextually speaking, authors encode redundant data in their cybertexts so that they can withstand interference by noise and can reach readers comprehensibly. The thought that an author hopes to communicate is the content of a cybertext, and the degree of its rhetorical treatment (the encoding) marks how dense that content or information will be. (Sometimes the content is its encoding.) As redundancy increases, the information density of a cybertext (in information science, its entropy) lessens. Inadequate encoding, and persistent extraneous noise, may lead a reader to mistake or doubt what a dense, underspecified text says. Textual criticism studies noise, that is, the effects that the channel itself (e.g., a scriptorium, a printing press, a noisy room) and extraneous interference (e.g., the interpretative readings that scribes and compositors sometimes impose on texts that they are copying) have on cybertexts. The complementary feedback cybertext, which registers that the outgoing message has been received, can be used to detect errors in transmission. The humanities enshrine this reader-to-writer, receiver-to-sender feedback in teaching, discussion, and peer-review. The digital humanities supplement the human reader-receiver by developing text-analysis software that feeds back semi-automatic analyses of its texts. Working with the information sciences, the digital humanities do not leave the reader-receiver to chance but build one. 
In this respect, computing humanists bring together human and machinic communications to form a cybernetic package.
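The inverse relation between redundancy and entropy described above can be made concrete with Shannon's formula. The sketch below (the function name and the sample strings are mine, not the chapter's) computes word-level entropy in bits per word: a heavily repetitive utterance scores lower than one in which every word is new.

```python
from collections import Counter
from math import log2

def entropy_per_word(text):
    """Shannon entropy in bits per word: high when every word is new,
    low when words repeat, i.e., when redundancy is high."""
    words = text.lower().split()
    total = len(words)
    return -sum((n / total) * log2(n / total) for n in Counter(words).values())

# A redundant oath versus a string of ten distinct words (both 10 words long):
redundant = "the truth the whole truth and nothing but the truth"
terse = "swift bright stars over silent frozen northern seas wheel tonight"
# Repetition lowers entropy: the oath carries fewer bits per word.
```

On this measure the all-distinct string reaches the maximum log2(10) ≈ 3.32 bits per word, while the repetitive oath falls well below it, which is the sense in which redundancy lessens information density.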

Cybertextual Simulations

Cybertextuality, to recapitulate, is a subset of cybernetics that studies the messaging and feedback modules — author, text, channel/media, noise, and reader — of languages and literatures. It is difficult to analyze this system directly because authors create reclusively. Computing humanists, for that reason, employ purpose-built simulations. Simulators enable us experimentally to test the theory and practice of cybertexts. Because this is an unusual way of thinking about what we are doing, I will break down the simulations by cybernetic module: cybertext, receiver/reader, noise, channel, and author/sender.

Poetry generators, some computer text games, and chatterbots are artificial or simulated author-agents. We can study how they implement natural language or we can analyze the cybertexts that they make, which include poems, stories, and conversation. The software simulates authors and outputs algorithmic cybertexts. The most recent Loebner Prize for the best chatterbot, awarded in 2005, went to Rollo Carpenter for his Jabberwacky. Authoring software often spoofs writing. For instance, three MIT students, Jeremy Stribling, Max Krohn, and Dan Aguayo, devised SCIgen — An Automatic CS Paper Generator, "a computer program to generate nonsensical research papers, complete with 'context-free grammar,' charts and diagrams." They submitted "Rooter: A Methodology for the Typical Unification of Access Points and Redundancy" to the ninth World Multi-Conference on Systemics, Cybernetics, and Informatics, which initially accepted it as a non-reviewed paper.

Media studies treat the cybernetic channel. Marshall McLuhan conflated the Wienerian cybertext with the channel when he said that the medium was the message. Mark Federman of the McLuhan Program explains this equation: "We can know the nature and characteristics of anything we conceive or create (medium) by virtue of the changes — often unnoticed and non-obvious changes — that they effect." Within cybertextuality, most research on the channel's impact on the utterances it carries takes place in usability studies. What affordances, an information scientist asks, does the channel have with respect to the communication of a message? Computer-mediated mail, webcasting, cell phone, printed book, posted letter, meeting, and lecture instantiate this cybernetic channel differently. The Knowledge Media Design Institute (Toronto) produces software named ePresence Interactive Media that enables remote viewers to take part in webcasts. Because computer workstations are themselves a channel, cybertextuality mainly analyzes how novel electronic media affect authoring and reading. Simulation of non-electronic channels by software is uncommon.

The digital humanities apply what we know of cybertext noise to model how a text metamorphoses over time. Computer-based stemmatics reconstructs the timeline of a work as noise alters it, state by state, in moving through the channel we call manuscript scriptoria and printed editions. Cladograms from Phylogenetic Analysis Using Parsimony (PAUP), software used by the Canterbury Tales Project, give a schematic tree of snapshots of a cybertext as it falls through time from one state to another (Dunning 2000). Stephen E. Sachs' The Jumbler, which algorithmically mixes up every letter in a word but the first and the last, simulates noise. A sentence muddled in this way can still be read. (A snceente muldded in tihs way can sltil be raed.) If our subject of research were noise itself, we would study how living scribes alter their exemplars in copying them. When people retell events from memory after a time, they suffer from another kind of noise, what psychologists name the post-event misinformation or retelling effect. Collation software can readily identify and time-stamp the emergence and reworking of textual variants. Such a cybertextual experiment would help classify the kinds of errors that scribes make and complicate over time.
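The Jumbler's noise can be approximated in a few lines. This is a sketch of the algorithm as the chapter describes it (shuffle every letter but the first and the last), not Sachs' own code:

```python
import random
import re

def jumble_word(word, rng):
    """Shuffle a word's interior letters; the first and last stay fixed."""
    if len(word) <= 3:            # nothing interior to shuffle
        return word
    inner = list(word[1:-1])
    rng.shuffle(inner)
    return word[0] + "".join(inner) + word[-1]

def jumble(text, seed=None):
    """Apply Jumbler-style noise to every alphabetic word in a text."""
    rng = random.Random(seed)
    return re.sub(r"[A-Za-z]+", lambda m: jumble_word(m.group(0), rng), text)

# jumble("A sentence muddled in this way can still be read.")
# yields text like the chapter's own scrambled example.
```

Because the shuffle is random, output varies from run to run; passing a seed makes a given scrambling repeatable, which matters if the simulation is to serve as a controlled experiment.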

We create text-analysis programs that act as artificial or simulated readers to give feedback on cybertexts. By running Robert Watt's Concordancer on any work, for example, we act as stand-ins for the work's author in submitting it to a computer reader for feedback. In both kinds of simulation, we control, as far as possible, the noise of the simulated channel by choosing the most error-free common carrier of our time: the workstation. For all our complaining, it is a reliable device, enabling many people to repeat the same cybertextual experiment many times, in many places. This reliability allows us to do experiments together and to agree on criteria whereby our research can be falsified.
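What such a simulated reader feeds back can be shown in miniature. The following keyword-in-context (KWIC) function is a minimal sketch of the kind of index a concordancer returns, not Watt's actual program:

```python
import re

def kwic(text, keyword, width=3):
    """List each occurrence of keyword with `width` words of context per side."""
    words = re.findall(r"[A-Za-z']+", text.lower())
    lines = []
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - width):i])
            right = " ".join(words[i + 1:i + 1 + width])
            lines.append(f"{left} [{w}] {right}".strip())
    return lines

# kwic("That time of year thou mayst in me behold", "year", 2)
# → ['time of [year] thou mayst']
```

Even this toy reader returns feedback no human reader would produce unaided: an exhaustive, rearranged index of the text's uses of a word.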

Some cybertextual simulators appear impoverished when compared to human beings. Concordancers output indexes and rearrange a text rather than respond, like a reader, to one. The Jumbler, Jabberwacky, and SCIgen play word-games. Other simulators devised by artificial intelligence and computational linguistics teach us much about communication and can stand in routinely for human beings. Machine-translation (MT) systems read text in one language and feed back a version of that text in another language. The morphological, lexical, syntactic, and semantic analyzers in a classical MT system have helped us understand how language works. Recent story-generation programs like MakeBelieve from the MIT Media Lab use Open Mind, a web database of 710,000 common-sense facts, as well as ConceptNet, a semantic network built on top of it (Liu and Singh 2002). The infrastructure by which MakeBelieve assembles little coherent stories from sample opening sentences uses a constituent structure parser, fuzzy matching, and a sentence generator. If these simulations or modeling programs had access to the same high-performance computing resources employed in protein-folding, cosmology, weather prediction, and airframe design, the results might be still more impressive.

The Cybertextual Cycle

The digital humanities have a helpful cognate discipline in cognitive psychology, which experimentally analyzes how the living speaker-writer and the living listener-reader manage their language, both message and feedback, in a noisy world. The digital humanities also analyze this process, but only indirectly, through our stored cybertexts alone.

The journal Brain last year published Peter Garrard's remarkable study of works by the English novelist Iris Murdoch as she succumbed to Alzheimer's disease. Tests during her final years revealed a mild impairment of working memory (in digit span). After her death in 1999, an autopsy found "profound bilateral hippocampal shrinkage." A healthy hippocampus is necessary for working memory. By generating the simplest form of text analysis — word-frequency lists from early, middle, and last novels — Garrard found an increasing lexical and semantic "impoverishment." Although Jackson's Dilemma, her last novel, showed no deterioration in syntax, it had a "more restricted vocabulary," that is, a smaller number of word-types (different words) and, thus, a greater rate of lexical repetition. Her earliest novel exhibited about 5,600 word-types per 40,000 words, but in Jackson's Dilemma that ratio had fallen to 4,500, a drop of about 20 percent. The final novel was not unreadable, but critics described it as transparent, economical, scant of explanation, and "distrait" in narrative. The medical tests and autopsy make predictions that the computer text-analysis substantiates. Together, they tie semantic complexity and information density to the size of working memory and to the hippocampus. Garrard's case study shows that experimentally backed evidence of cognitive effects in language processing helps explain the lexical minutiae of authoring at work, whether in making or remaking/reading text. These low-level mechanics are beyond our ability to observe consciously in ourselves as we utter phrases and sentences, but we can access them experimentally with the help of special tools.
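Garrard's measure is simple to reproduce in outline. The sketch below (the function name is mine, not Garrard's code) counts word-types in a span of running words and re-derives the arithmetic of the reported decline:

```python
import re

def word_types(text, n=40000):
    """Count distinct word-types among the first n word-tokens of a text."""
    tokens = re.findall(r"[a-z']+", text.lower())[:n]
    return len(set(tokens))

# The reported figures: about 5,600 types per 40,000 words in the earliest
# novel, about 4,500 in Jackson's Dilemma.
drop = (5600 - 4500) / 5600
# → roughly 0.196, i.e. a drop of about 20 percent
```

Run over successive novels, such a count yields exactly the kind of word-frequency evidence from which Garrard inferred lexical impoverishment.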

The language powers movingly described in the case of Iris Murdoch have two cognitive staging grounds: working memory, where we consciously recall spoken language and written images of text so that we can work with them; and the big black box of unconscious neurological functions that working memory accesses but that we cannot access directly. Alan Baddeley, an English cognitive psychologist, proposed working memory in the mid-1970s. It remains today the standard theoretical model. In 2003 he published a schematic flowchart of how we consciously use working memory to speak aloud something we have read or heard (Figure 23.1).


Figure 23.1  Baddeley's proposed structure for the phonological loop. Source: Baddeley, Alan, 2003, "Working Memory and Language: An Overview." Journal of Communication Disorders 36.3: 189–203. Reprinted with permission from Elsevier.


We consciously place or retrieve a sentence seen on the page into a visual short-term store, recode it elsewhere into phonological form, and then send that recoded data to an output buffer for pronouncing. The visual short-term store is a visuo-spatial sketchpad in working memory. Nothing in this visual short-term store can be processed as speech unless it is converted to auditory form. Mentally, we process language as speech, not writing. That is why sudden heard speech always gains our immediate attention: auditory data do not need to be re-encoded to access working memory. When we hear speech, we place it in another short-term store, this time a phonological one that Baddeley called the phonological loop. An utterance enters this short-term store and exits from it after a short time, unless we consciously rehearse or refresh it, as if memory were a loop. Then we send it to an output buffer for pronouncing.


Figure 23.2  Baddeley's three-component model of working memory. Source: Baddeley, Alan, 2003, "Working Memory and Language: An Overview." Journal of Communication Disorders 36.3: 189–203. Reprinted with permission from Elsevier.


Another schematic by Baddeley identifies the model's functions — input, analysis, short-term store, recoding, and output buffer — as being fluid (Figure 23.2). The contents come and go dynamically. However, the model draws on the brain's crystallized systems, which interpret what images signify linguistically, give long-term memory of events, and enable phonological, lexical, grammatical, and syntactic analysis. We store the content in these systems and it is available for use over a long period of time. In the classical model, Baddeley's language functions correspond, more or less, to locations in the brain. These locations and their language functions have been disputed. The classical Lichtheim-Geschwind model has the angular gyrus phonologically recoding visual data from the visual cortex at the back of the brain, passing it to Wernicke's area, which processes phonemic language semantically and in turn sends it to Broca's area, which shapes semantic text into syntactic form in preparation for articulation (Geschwind 1979). Recently, however, subcortical areas in the basal ganglia have been shown to regulate cognitive sequencing, including the motor controls that produce speech. Thus the phonological loop in working memory, which represents words as articulated, depends on the basal ganglia. Working memory uses multiple locations simultaneously, but it is associated with the hippocampus in the forebrain.

Baddeley's flowchart is a torus, a looping cybernetic channel for cybertext. What goes out as uttered speech authored by us in turn enters working memory as received heard speech to which we will give, in turn, feedback. Cognitive language processing is thus a cybertextual message-feedback system.

Creating takes place in the black box outside working memory. Normally we are not conscious of what we say before we say it. Few people script an utterance in working memory before uttering. The same holds true for written output, for although we may hear silently the text as we are storing it on screen or paper, we seldom preview it in working memory before typing it. There is often surprise in discovering what we have just said. As Baddeley's first flowchart shows, we are also not aware of what goes on in boxes A and D, the visual and phonological analyses of input sense data, before we store those analyses in working memory. Between the visual and auditory input, and our conscious reception of image and speech in working memory, there are two black boxes in which we actively construct what we think we see and hear. The so-called McGurk effect tells us that we unconsciously subvocalize a model of what someone else is saying. We then "hear" only what we analyze we are hearing. The last one to utter any heard utterance, in effect, is the hearer-observer. The McGurk experiment is described by Philip Lieberman (2000: 57, citing McGurk and MacDonald 1976):

The effect is apparent when a subject views a motion picture or video of the face of a person saying the sound [ga] while listening to the sound [ba] synchronized to start when the lips of the speaker depicted open. The sound that the listener "hears" is neither [ba] nor [ga]. The conflicting visually-conveyed labial place-of-articulation cue and the auditory velar place of articulation cue yield the percept of the intermediate alveolar [da]. The tape-recorded stimulus is immediately heard as a [ba] when the subject doesn't look at the visual display.

The listener hears a sound that no one uttered. Cybertextually, a receiver co-authors, by virtue of his cognitive interpretation of auditory and visual inputs, a message from the sender.

It is not unreasonable to believe that the McGurk effect applies to all heard and seen words, from whatever source. If we utter speech unselfconsciously, not knowing what it will be until we experience it ourselves — that is, if we do not script our words in working memory before we say them (and to do so would be to reduce our conversation to an agonizingly slow, halting process) — and if we receive our own utterance auditorily (listening to our words as we speak them), then our brain must unconsciously model those words before they fall into our working memory. Whenever we author speech, therefore, we hear it at the same time as our listeners. Or, when we write something down, that utterance also reaches us through the visual cortex and must be decoded and analyzed cognitively before we can receive it in working memory. Cybertextually, the speaker receives his own message at the same time that his listener or his reader does. A message is always cognitively constructed by its receivers, and the author is always one of these constructing observers. A message passes through, simultaneously, different cybernetic channels for its sender and for every receiver. One goes back to the author, and others to audience members and readers.

The author's self-monitoring is not much studied. Feedback for every utterance is of two kinds. First, the author's initial cognitive construction of what he hears himself say or sees himself write, on paper or screen, is published in a moment of conscious recognition in his working memory. Second, the author's response to his own self-constructed message then helps him to revise his last utterance and to shape his next utterance. Thus, cybertextually, authors steer or govern their speech internally by a recursive message-feedback cycle. Unconsciously, the author creates a sentence and utters it (a messaging). By witnessing the utterance through his senses, in iconic memory, he first recognizes what he has created. That moment of recognition leads him to construct what he hears or sees (a feedback response). His construction of his own utterance, by reason of being held in working memory, becomes in turn another message to his mind's unselfconscious language-maker that enables him to rewrite or to expand on what he has just uttered. If we fully knew what we were saying in advance of saying it, if we were aware of our entire composition process consciously as it unfolded, there would be no moment of recognition and hence no internal cognitive feedback.

The McGurk effect complicates basic cybernetics by showing that every reader authors all messages received by him. Once we realize that every author also gives feedback to all his own messages, that authors and readers behave identically, then cybernetic process becomes cyclic. Uttering, viewed cybertextually, is cognitively recursive, complex, and self-regulatory (or, in Hayles' term, homeostatic). It is recursive because every utterance to the senses (vocalized or written), just by reason of being cognitively modeled, sets in motion an internal mimicking utterance in the speaker's mind. Uttering is complex because a single cybertext proceeds through different channels, one internal and many external, each of which is vulnerable to different types of noise. Uttering is self-regulatory by tending to dampen positive feedback, which accelerates a process beyond what is desired, with negative feedback, which brakes or reverses an action. Positive feedback in cybernetics urges that information density be increased. Negative feedback urges that information density be reduced. Authors govern this density by adding or deleting redundancy. Mechanically, James Watt's flyball governor receives information on the speed of the engine, information that can trigger its slowing down. In the same way, the feedback that authors receive from themselves tells them how to revise what they are saying to make it more effective. Positive feedback tells us to repeat and vary; negative feedback, to simplify and condense. This self-monitoring, I argue, is partly available for study with text-analysis tools.

The Author's Self-monitoring

We now know a great deal about the cognitive mechanics of authoring. It is largely unconscious. Because we cannot remember explicitly how we utter sentences, we have concocted fugitive agents like the Muse to explain creativity. Etymologically, "muse" comes from the Indo-European base for "mind," that is, memory, of which we have several kinds. There is associative long-term episodic and semantic memory, from which we can extract information but can never read through as we do a book. Try as we may by probing long-term memory, in particular, we cannot recover the steps of the uttering process that creates a sentence. On occasion, we make one painstakingly in working memory, a short-term, explicit phonological store with a known capacity: 4 ± 2 items in Cowan (2000), revising George Miller's 7 ± 2 items, and only as many words as we can utter aloud in two seconds, a limit confirmed by many experiments since. The gist behind a sentence taking shape consciously in working memory, however, still comes from a cognitive unknown, "through a glass darkly." We use working memory to rearrange words in such semi-memorized sentences, but the authoring process that suggests them must be queried by a type of memory that we cannot recover as an episode or a semantic meaning. We know from experimentation that people absorb, in the short term, a non-recallable but influential memory of sensory experience. This is not the brief iconic or echoic memory of experience that lasts about 100 milliseconds, the space of time between two claps of a hand. We are conscious of these traces. It is the residue they leave in the unconscious, a residue called priming, which is a form of implicit memory lasting several weeks. When we have implicit memory lasting indefinitely in perma-store, we call it procedural memory instead of priming. Here cognitive language processing and other motor skills (like bicycle riding) are stored.
Although procedural memories are powerful, we can only recall them by performing them. Why can we not remember, step-by-step, how we make a sentence? The answer seems to be that there is no place in our brain to put the protocol, and maybe no names to identify what goes into it. Its complexity would likely overwhelm working memory many times over. Because we utter easily, we don't really need to know how we do it unless we enjoy frustration and take up stylistics and authorship attribution.

The only window of consciousness we have on our language processing is the phonological loop in working memory. Cowan's magic number, "four, plus or minus two" items (2000; cf. Baddeley 1975), governs all our speaking, listening, writing, and reading. What do we know about this humiliating limit? Reading-span tests show that we can recall a range of 2 to 5.5 final words of a group of unrelated sentences (Just and Carpenter 1992): a smaller number, but then we have to deselect all initial words in each sentence. The "word-length" effect shows that the number of items in working memory lessens as their length in syllables increases. We can be confounded at first by garden-path sentences such as "The horse raced past the ancient ruined barn fell" because a key word at the end of a clause (here the verb "fell") takes as subject, not the opportunistic word just preceding, but a noun at the beginning, on the very edge of working-memory capacity. The "articulatory suppression" effect confounds us in another way. If I repeat aloud a nonsense word, "rum-tum, rum-tum, rum-tum," it subvocally captures the working memory of my listeners and prevents them from consciously attending to other words. We process words in working memory by drawing on the motor instructions that Broca's area gives to articulate them aloud. Another effect, termed "acoustic similarity," describes our difficulty in recalling a list of similar-sounding words. We can remember semantically related groups like "ghost," "haunt," "sprite," "poltergeist," "specter," "balrog," and "revenant" more easily than simple terms that sound alike, such as "lake," "rack," "trick," "rock," "cake," "coke," and "make." If working memory did not encode words acoustically, we would not experience this trouble.

Self-monitoring is partly conscious when taking place in working memory today, principally while we read what we have just written onto a page or word-processed onto a screen. We can only read, however, from five to nine acoustically-encoded items at a time. Baddeley (2004: 26) shows that recall errors, fewer than two for 9-letter words, more than double for 10-letter words (which must exceed working-memory limits). Only when we can chunk the items into larger units, as in real English words, does the error rate fall. Empirical research into the psychology of reading and writing, which teaches the teachers how to make children literate, reveals that our eyes read by saccades (jumps), fixations, and gazes, that a typical saccade takes twenty milliseconds and covers six to eight letters, and that fixations have a perceptual span of about three characters to the left and fifteen to the right (or four to five words in length) and last 200–300 milliseconds unless they settle into a gaze. College-level students jump and fix their eyes 90 times for every 100 words; one-quarter of those jumps go backwards (Crowder and Wagner 1992: Table 2.2). Function words get briefer fixations than content words (Gleason and Ratner 1998: fig. 5.4). We evidently use iconic memory to keep a continuing image of a cybertext before ourselves so that we can free up our working memory for other items, such as higher-level entities than words. It is thought that these entities are propositions, each of which holds an argument and a predicate. Examples of propositions are subject-verb and adjective-noun combinations such as "The sun is a star" and "bright as Venus." The average reading span, calculated to be between six and twelve propositions, that is, between twelve and twenty-four arguments and predicates, can pack a lot into working memory because we can chunk words into higher-level entities. 
In other words, we form concept maps in which high-level propositions are placeholders for many lower-level items. An item can be a word (argument or predicate), a proposition (a combined argument-predicate), or one or more related propositions.

Consider the first two lines in the first quatrain of Shakespeare's sonnet 73. The 18 words in the first two lines cannot fit separately into the phonological loop (PL) of working memory (WM) unless they are chunked. Six propositions can do the job (Figure 23.3).

The absolute maximum size of working memory can be as much as ten times four, plus or minus two, but only if words and propositions are fused into a schema. Although these two lines have at least twenty argument-predicate propositions, they can be fitted into the average reading span with some chunking, if working memory handles only propositions 1, 2, 3, 8, 9, and 10, but these are spread out over six levels, some more deeply encoded than others. Of course, chunking becomes easier if the text appears constantly before one's eyes in iconic memory for ready reference. Three quatrains and a couplet would have 21 + 21 + 21 + 10 + 3 propositions, or 76 in all (within about 125 words), well inside the maximum size of working memory, 40, plus or minus 20 basic items, heavily chunked. See Figure 23.4.


Figure 23.3  The first quatrain of Shakespeare's sonnet 73.


A ten-digit telephone number, with the three-digit area code and exchange chunked to one item each plus the local number, falls just within working memory for unrelated numbers. Cybertexts are more heavily redundant than numbers and are integrated syntactically. The maximum reading span for meaningful text would appear to be sonnet-sized, that is, about the length of the Lord's Prayer or the Ten Commandments. The average reading span would cover a shorter stretch, perhaps a quarter of that, about the amount of text in a twenty-word oath such as "I do solemnly swear to tell the truth, the whole truth, and nothing but the truth, so help me God." The quantitative limits of working memory in authoring and reading serve an important purpose for cybertextuality: they define the dynamic playing field for an author's currente calamo revisions of his work. In size, this cognitive field covers a quatrain-to-sonnet-sized window of text that shifts forward in jumps as the author utters a text. If we examine how the author manipulates text within this window, we may find new measures for authorship. They might include the size of the author's uttering window (how many words the author inputs without looking up to absorb them and to generate internal feedback), the types of editorial changes the author carries out inside that window (are they transformative in a syntactic sense, or are they just diction-related?), and the speed with which he moves through the composing process.
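One of these proposed measures, the size of the uttering window, could be estimated from a time-stamped word log. A minimal sketch, assuming a log of (word, seconds) pairs and treating any pause over an arbitrarily chosen threshold as the author looking up to absorb what he has written:

```python
# Sketch: estimate "uttering windows" from a time-stamped word log.
# A pause longer than PAUSE_THRESHOLD seconds is taken, by assumption,
# to mark the author stopping to reread and generate internal feedback.

PAUSE_THRESHOLD = 5.0  # seconds; an illustrative value, not an empirical one

def uttering_windows(log):
    """Split a [(word, timestamp_seconds), ...] log into bursts of
    words separated by pauses longer than PAUSE_THRESHOLD."""
    windows, current, last_time = [], [], None
    for word, t in log:
        if last_time is not None and t - last_time > PAUSE_THRESHOLD:
            windows.append(current)
            current = []
        current.append(word)
        last_time = t
    if current:
        windows.append(current)
    return windows

log = [("That", 0.0), ("time", 0.6), ("of", 1.1), ("year", 1.5),
       ("thou", 9.0), ("mayst", 9.4), ("in", 9.9), ("me", 10.2),
       ("behold", 10.8)]
print([len(w) for w in uttering_windows(log)])  # [4, 5]
```

The window sizes themselves, their variance, and their drift over a session are the kind of quantitative authorship measure the paragraph above proposes.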


Figure 23.4  Propositions in the first quatrain of Shakespeare's sonnet 73.


Computer Text Analysis and the Cybertextual Cycle

Traditional stylistics investigates a text spatially, as a chain of words that, though it may take time for the reader to traverse, exists complete in a single moment. The text has spatial extent on the page but is frozen in time. Most text-analysis programs count the number and frequency of words, phrases, and sentences but do not analyze the text's dynamics as it unfolds. In both making and reading, however, an author has dynamic strategies for revising what he utters, moment by moment. Quantitative information about the frequency and proximity of words and phrases to one another characterizes a whole flat text, or samples of it that are judged representative. A text-analysis method attentive to the text's unfolding, however, can capture aspects of the author's cognitive idiolect, such as the ways in which he reacts, positively or negatively, to the feedback that his unselfconscious cognitive modeling gives him. Style, then, is partly a function of the author's internal cybertextual message-feedback cycles.

My traditional text-analysis studies of Chaucer and Shakespeare highlight features that cognitive processing can explain. Phrase concordancers identify repeated fixed phrases and unfixed collocations in Chaucer's The General Prologue that recur elsewhere in the entire Canterbury Tales. The same holds true of Shakespeare's Troilus and Cressida in his body of work. Their natural vocabularies appear to have been phrasal in nature, comprising units no longer than four, plus or minus two terms, which I called phrasal repetends. Collgen, a program included within TACT, detected them. No repeated phrases exceed in length the capacity of the phonological store of working memory. Their phrasal repetends also form clusters. If we recursively concord a concordance of their repeated words and phrases, we find associative networks. Such clusters must arise from the structure of their long-term memory. I found unusual peaks of overlap in phrasal repetitions between Chaucer's General Prologue and the last tale in the work, the Manciple's (Lancashire 1993a, 1993b). These clustered repetitions did not arise from subject matter specific to either poem; they belonged to Chaucer's free-floating, current language pool. The shared phrasal repetitions suggest that he wrote the first and the last poem in the Canterbury Tales at about the same time. A comparable overlap ties Shakespeare's Troilus to Hamlet and other plays written in the same period.
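The kind of search that Collgen performed can be approximated with a short n-gram routine. This is a sketch only: TACT's actual matching was more sophisticated, and the word lists below are invented snippets standing in for real corpora.

```python
from collections import Counter

def repeated_ngrams(words, min_n=2, max_n=6):
    """Return fixed phrases of min_n..max_n words that occur
    more than once in a single word list, with their counts."""
    found = {}
    for n in range(min_n, max_n + 1):
        counts = Counter(tuple(words[i:i + n])
                         for i in range(len(words) - n + 1))
        for gram, c in counts.items():
            if c > 1:
                found[" ".join(gram)] = c
    return found

def shared_repetends(text_a, text_b, **kw):
    """Phrases repeated within text_a that are also repeated in text_b:
    a crude analogue of comparing repetends across two poems."""
    a = repeated_ngrams(text_a.lower().split(), **kw)
    b = repeated_ngrams(text_b.lower().split(), **kw)
    return sorted(set(a) & set(b))

a = "whan that aprille with his shoures soote whan that the yonge sonne"
b = "whan that he rood whan that she wente"
print(shared_repetends(a, b))  # ['whan that']
```

An unusual peak in the number of such shared repetends between two texts is the sort of signal interpreted above as evidence of contemporaneous composition.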

I analyzed one speech in particular from Shakespeare's problem play, Troilus and Cressida, the Greek leader Agamemnon's disgruntled address to his generals before the walls of Troy (Lancashire 1999). Without looking for authorial self-monitoring, I found it. This perplexing speech breaks into two mirroring sonnet-sized passages, lines 1–16 and 16–29. Repeated collocations appear in bold; words that occur only once in Shakespeare's vocabulary are in large capitals, and words that occur twice are in small capitals (Figure 23.5).

Shakespeare anchors this two-part speech on five words, all repeated in each part: "princes," "cheeks," "trial," "action" (or "works"), and "matter." Each part begins with a question. The right-hand glosses highlight the similar sequence of propositions mirrored in both parts. Despite the novel Latinate words, Shakespeare drew much from associational clusters in his long-term memory. Ten collocations here, each occurring four or more times in his works, form two clusters. One centers on the node word "grow" (7), which collocates with "cheeks" (14), "sap" (7), "knot" (6), "pine" (5), "prince" (5), "veins" (5), and "check" (4), and is found in the first part of the speech. Another centers on the words "winnows" (5) and "wind" (4) in the second part. The word "trial" connects the two parts. Shakespeare seems to have created lines 1–16 in a single dash, then paused to take them in. His dynamic cognitive field appears to have been about a sonnet in length. Seeing the difficulty of what he had written, he then "explained" it to himself again. Feedback, his recognition and reassessment of lines 1–16, steered repetition with variation within a second structured segment.
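Unfixed collocation of this sort, as opposed to fixed repeated phrases, can be counted by a simple window search. A sketch, with an invented snippet standing in for the corpus and a span of four words on either side chosen arbitrarily:

```python
from collections import Counter

def collocates(words, node, span=4):
    """Count words co-occurring with `node` within `span` words on
    either side: a crude stand-in for a collocation tool's node-word
    search. Positions, not meanings, define the window."""
    counts = Counter()
    for i, w in enumerate(words):
        if w == node:
            lo, hi = max(0, i - span), min(len(words), i + span + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[words[j]] += 1
    return counts

text = ("the sap doth grow and in his veins the sap "
        "doth grow again").split()
print(collocates(text, "grow")["sap"])  # 2
```

Clusters emerge when the top collocates of one node word ("grow") are themselves concorded in turn, the recursive procedure described above.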

Feedback governs the information richness of the Agamemnon speech. The first part has six strange words, an acoustic pun ("check" and "cheek"), and almost no repetitions. It is vulnerable to misconstruction by Shakespeare's audience. His satisfaction with the dense imagery and novel language gave him positive feedback and led him to add six neologisms to the second part. Yet doubts about its intelligibility generated negative feedback, and so he introduced redundancy in the form of restatement. He adjusted the rate of information, a cybertext's "entropy" (Pierce 1961: 80), by simultaneously accelerating and braking the passage's information density. Because writing lifts the capacity constraints of working memory, and everything Shakespeare wrote remained accessible to his eyes long after uttering, he could allow density to increase. Shakespeare's cybertextual style uses repetition with variation of large units to counterbalance the unrelenting lexical strangeness he cultivates, devising new words and fixing them in striking metaphors. He combines positive and negative feedback.


Figure 23.5  Agamemnon's speech in Troilus and Cressida (1.3.1–29), with repeated collocations and rare words marked.


A New Philology

Central to old philology are historical linguistics and a literary scholarship that applies what we know about language to the understanding of literature in its historical period. Typical old-philology tools include the concordance and the dictionary. Both read homogeneous, flat, unchanging material texts, and computerized concordancers do not change that fact. New philology supplements this traditional text analysis with tools from cognitive psychology and information science. These include brain-imaging devices and usability-testing software like TechSmith Morae, which records keystrokes, mouse actions, screens, the writer's face, and other events that take place in a composing session. These tools read the dynamic, non-material texts of authors at work in composing. Both old and new philology, when employing computer tools, are cybertextual to the extent that they apply evidence of the author's use of his memory systems, as in self-monitoring and cognitive-feedback processing, to interpret the flat material text that he utters.

The most promising applications of the new philology lie in the future. Because the computer workstation can supplement the capacity of working memory, we can observe and measure how an author today responds to his own in-progress, window-by-window uttering of a work. Access to brain-imaging technology is beyond the means of literary researchers without medical qualifications. The word-processor, however, enables a writer to solve garden-path sentences, to transform large verbal units (sentences, paragraphs) on the fly, and to adjust rapidly the redundancies he builds into his work. And every event in a word-processed document (continuations, deletions, additions, transpositions) can be time-stamped and counted. Lexical events, especially ones resulting from using online thesauri and lexicons, can be tabulated. Phrasal- and clausal-level events can be described grammatically: passive constructions made active, verb tenses altered, conjuncts replaced by subjuncts, and so on. A new cybertextual stylistics can be based on quantitative information available from the archived word-processing sessions that go into creating a text. Usability software undertakes the kind of archiving that John B. Smith, a pioneer in thematic criticism in English studies and a process-based writing researcher, applied in his protocol analysis of living writers. Smith believed that asking writers to think aloud as they wrote, a technique Linda Flower introduced (Hayes and Flower 1980), injected noise into that process, and so he used instead a writer's keystrokes, sequences of which he interpreted as actions. Tracking, replaying, parsing, and displaying the writing process replaced Flower's thinking-aloud protocol. Santos and Badre inferred 73 percent of a writer's mental chunks through automatic analysis of keystrokes (1994: 75). Only semi-automatic analysis can tame the mass of data from such recording sessions.
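The core of such chunk detection, inferring boundaries from pauses between keystrokes, can be sketched as follows. Santos and Badre's actual algorithm differs; the one-second threshold and the log format here are assumptions of this example.

```python
def chunk_keystrokes(events, pause=1.0):
    """Group a [(char, timestamp_seconds), ...] keystroke log into
    chunks, opening a new chunk whenever the gap between two
    keystrokes exceeds `pause` seconds. A crude stand-in for
    automatic chunk detection; the threshold is invented."""
    chunks, current, last = [], "", None
    for ch, t in events:
        if last is not None and t - last > pause:
            chunks.append(current)
            current = ""
        current += ch
        last = t
    if current:
        chunks.append(current)
    return chunks

events = [("t", 0.0), ("h", 0.2), ("e", 0.4),
          (" ", 2.0), ("s", 2.1), ("u", 2.3), ("n", 2.5)]
print(chunk_keystrokes(events))  # ['the', ' sun']
```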

Morae offers some of the functionality that Smith describes. Its name, from the Latin word for "delay," also denotes a unit of sound in natural language. Its recording protocol can log each keystroke (including modifier keys) with its elapsed time, its time of day, and its date, and can link it to a video of the computer screen as well as to an image of the author at work at that very moment. A distribution graph of the activity log shows the density of keystrokes at any instant during the keyboarding session. A partial spreadsheet, image, and graph of a Morae session in which I worked on this essay (Figure 23.6) reveal a rhythmic chunking in the uttering process.

A Morae graph can be exported to a spreadsheet for analysis, and one can play back, in a sequence of recorded sessions, the making of an entire work. Such records capture cybertextual self-monitoring at work.
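The density graph behind such an export can be approximated by binning keystroke timestamps; a minimal sketch, assuming nothing more than one timestamp per keystroke, in seconds from the start of the session:

```python
def keystroke_density(timestamps, bin_seconds=10):
    """Count keystrokes per fixed-width time bin: the raw numbers
    behind a distribution graph of a keyboarding session."""
    if not timestamps:
        return []
    n_bins = int(max(timestamps) // bin_seconds) + 1
    bins = [0] * n_bins
    for t in timestamps:
        bins[int(t // bin_seconds)] += 1
    return bins

# A burst of typing, a pause, then a second, lighter burst.
stamps = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 25.0, 26.0, 27.0]
print(keystroke_density(stamps))  # [6, 0, 3]
```

Alternating high and empty bins are exactly the rhythmic chunking that the Morae session above makes visible.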

Even one recording session by a Shakespeare or a James Joyce would be invaluable evidence of the author's cognitive self-monitoring while composing. Their habits, of course, can only be inferred from texts and foul papers. For most writers, we do not even have the equivalent of Joyce's extraordinary notebooks, such as the well-known "Circe 3" (1972: 278–83) from Ulysses, which holds his brief notes in vertical and horizontal lists, many crossed out in colored crayon after he used them in revision. It is hard to resist interpreting Joyce's notes, most of which fall between one and nine words, as direct, cognitively chunked deposits from his limited-capacity, very human working memory. Short lyrics, especially the 14-line sonnet, may also remain the most popular verse forms because of cognitive chunking: they can hold the maximum number of propositions that we can consciously encode in memory. Yet the truth is that we will never know how the minds of these writers worked. If present and future writers, however, record their authoring sessions and donate them, with their letters and papers, for study, critics bent on text analysis, close reading, and stylistics will inherit a windfall of data and new measures. Cybertextuality then will be a widely testable theory.


Figure 23.6  Part of my writing session in Morae.


Selected References

Aarseth, Espen J. (1997). Cybertext: Perspectives on Ergodic Literature. Baltimore: Johns Hopkins University Press.

Baddeley, Alan (2003). "Working Memory and Language: An Overview." Journal of Communication Disorders 36.3: 189–203.

Baddeley, Alan (2004). Your Memory: A User's Guide. New Illustrated Edition. Richmond Hill, Ontario: Firefly Books.

Baddeley, Alan, Neil Thomson, and Mary Buchanan (1975). "Word Length and the Structure of Short-term Memory." Journal of Verbal Learning and Verbal Behavior 14.6: 575–89.

Carpenter, Rollo (2006). Jabberwacky. Icogno. URL: <http://www.jabberwacky.com>. Accessed May 8, 2006.

Cowan, Nelson (2000). "The Magical Number 4 in Short-term Memory: A Reconsideration of Mental Storage Capacity." Behavioral and Brain Sciences 24: 87–185.

Crowder, Robert G., and Richard K. Wagner (1992). The Psychology of Reading: An Introduction. New York: Oxford University Press.

Dunning, Alastair (2000). "Recounting Digital Tales: Chaucer Scholarship and The Canterbury Tales Project." Arts and Humanities Data Service. <http://ahds.ac.uk/creating/case-studies/canterbury/>.

ePresence Consortium (2006). ePresence Interactive Media. University of Toronto: Knowledge Media Design Institute. <http://epresence.tv/mediaContent/default.aspx>. Accessed May 8, 2006.

Federman, Mark (2004). "What is the Meaning of The Medium is the Message?" <http://individual.utoronto.ca/markfederman/article_mediumisthemessage.htm>. Accessed May 8, 2006.

Galison, Peter (1994). "The Ontology of the Enemy: Norbert Wiener and the Cybernetic Vision." Critical Inquiry 21: 228–66.

Garrard, Peter, L. M. Maloney, J. R. Hodges, and K. Patterson (2005). "The Effects of Very Early Alzheimer's Disease on the Characteristics of Writing by a Renowned Author." Brain 128.2: 250–60.

Geschwind, Norman (1979). "Specializations of the Human Brain." The Brain. A Scientific American Book. San Francisco: W. H. Freeman, pp. 108–17.

Gleason, Jean Berko, and Nan Bernstein Ratner (Eds.) (1998). Psycholinguistics, 2nd edn. Fort Worth: Harcourt Brace.

Haraway, Donna (1985). "A Manifesto for Cyborgs: Science, Technology, and Socialist Feminism in the 1980s." Socialist Review 80: 65–108.

Hayes, J. R., and L. S. Flower (1980). "Identifying the Organization of Writing Processes." In L. W. Gregg and E. R. Steinberg (Eds.). Cognitive Processes in Writing. Hillsdale, NJ: Lawrence Erlbaum.

Hayles, N. Katherine (1999). How We Became Posthuman: Virtual Bodies in Cybernetics, Literature, and Informatics. Chicago: University of Chicago Press.

Just, Marcel A., and Patricia A. Carpenter (1992). "A Capacity Theory of Comprehension: Individual Differences in Working Memory." Psychological Review 99: 122–49.

Lancashire, Ian (1993a). "Chaucer's Phrasal Repetends and The Manciple's Prologue and Tale." In Computer-Based Chaucer Studies. CCHWP 3. Toronto: Centre for Computing in the Humanities, pp. 99–122.

Lancashire, Ian (1993b). "Chaucer's Repetends from The General Prologue of The Canterbury Tales." In The Centre and its Compass: Studies in Medieval Literature in Honor of Professor John Leyerle. Kalamazoo, MI: Western Michigan University, pp. 315–65.

Lancashire, Ian (1999). "Probing Shakespeare's Idiolect in Troilus and Cressida I.3.1–29." University of Toronto Quarterly 68.3: 728–67.

Lancashire, Ian (2004). "Cybertextuality." TEXT Technology 1–18.

Lancashire, Ian (2005). "Cognitive Stylistics and the Literary Imagination." Companion to Digital Humanities. Cambridge: Cambridge University Press, pp. 397–414.

Lieberman, Philip (2000). Human Language and Our Reptilian Brain: The Subcortical Bases of Speech, Syntax, and Thought. Cambridge, MA: Harvard University Press.

Liu, Hugo, and Push Singh (2002). "MAKEBELIEVE: Using Commonsense Knowledge to Generate Stories." Proceedings of the Eighteenth National Conference on Artificial Intelligence, AAAI 2002. Edmonton: AAAI Press, pp. 957–58. <http://agents.media.mit.edu/projects/makebelieve/>.

Lyotard, Jean-François (1984). The Postmodern Condition: A Report on Knowledge (Geoff Bennington and Brian Massumi, Trans.). Minneapolis: University of Minnesota Press.

Masani, R. P. (1990). Norbert Wiener 1894–1964. Basel: Birkhäuser.

McGurk, H., and J. MacDonald (1976). "Hearing Lips and Seeing Voices." Nature 263: 747–8.

Miller, G. A. (1956). "The Magical Number Seven, plus or minus Two: Some Limits on our Capacity for Processing Information." Psychological Review 63: 81–97.

Pierce, John R. (1961). An Introduction to Information Theory: Symbols, Signals & Noise, 2nd edn. New York: Dover, 1980.

Sachs, Stephen (2006). The Jumbler. <http://www.stevesachs.com/jumbler.cgi>. Accessed May 8, 2006.

Santos, Paulo J., and Albert N. Badre (1994). "Automatic Chunk Detection in Human-computer Interaction." Proceedings of the Workshop on Advanced Visual Interfaces. New York: ACM, pp. 69–77.

Smith, John B., Dana Kay Smith, and Eileen Kupstas (1993). "Automated Protocol Analysis." Human–computer Interaction 8: 101–45.

Stribling, Jeremy, Max Krohn, and Dan Aguayo (2006). SCIgen – An Automatic CS Paper Generator. Cambridge, MA: Computer Science and Artificial Intelligence Laboratory, MIT. <http://pdos.csail.mit.edu/scigen/>. Accessed May 8, 2006.

TechSmith (2006). Morae Usability Testing for Software and Web Sites. Okemos, MI. <http://www.techsmith.com/morae.asp>. Accessed May 8, 2006.

Watt, Robert (2004). Concordance. Version 3.2. <http://www.concordancesoftware.co.uk/>. Accessed May 8, 2006.

Wiener, Norbert (1948). Cybernetics or Control and Communication in the Animal and the Machine, 2nd edn. Cambridge, MA: MIT Press, 1961.

Wiener, Norbert (1950). The Human Use of Human Beings: Cybernetics and Society. New York: Hearst, 1967.
