“A 'New' Computer-Assisted Literary Criticism?”
Raymond G. Siemens, Malaspina University-College, Canada
Tamise J. Van Pelt, Idaho State University, USA
William Glen Winder, University of British Columbia, Canada
Dene M. Grigar, Texas Woman's University, USA
Susan Schreibman, University College Dublin, Eire
Chair: Raymond G. Siemens
Electric Theory (Truth, Use, and Method)
Tamise J. Van Pelt
First Wave critics of the electronic environment - especially those
discussing hypertext in the late '80s and early '90s - make an
interesting discovery about theory and computing: electronic
literacy confirms post/structural theories of reading and writing.
Given a computer environment, theory finds that its principles are
indeed true.
Defining semiosis as the reading of codes and the writing of signs,
J. David Bolter (1991) argues that "the theory of semiotics becomes
obvious, almost trivially true, in the computer medium" (196).
Similarly, George P. Landow (1992) extends Bolter's claim to
encompass the theories of Barthes, Foucault, Bakhtin, and Derrida,
pointing to the bilocation of ideas of textual openness, of the
network, of polyvocality, and of decenteredness both in
post-structural theory and computerized hypertextual writing. Thus,
Landow concludes: "something that Derrida and other critical
theorists describe as part of a seemingly extravagant claim about
language turns out precisely to describe the new economy of reading
and writing with electronic virtual, rather than physical, forms"
(8). Even the hypertext novel can be an exemplar of literary
criticism; for instance, Michael Joyce's Afternoon embodies both the
content and the practice of psychoanalytic theory, using resistance
as a literal compositional principle and placing desire at the
center of the novel's reader-text interface.
This amazing convergence between post/structural theory and the
computing medium suggests that hypertext's ability to stage the
principles informing diverse (and even contradictory) literary
theories defines a property of the electronic medium itself. Since
the computer environment offers theory a self-validating medium, the
question that once determined the authority of a theory - can its
principles help the reader discover the truth of a text? - seems
futile to ask. If Dillon is right and our technologies are actually
embodiments of our theories (in Rouet, 8, emphasis mine), then this
self-validating function of electrified theory undermines, by
inevitable affirmation, the critical position of the
reader-theorist. Because the electronic medium reconfigures the
relation between the reader and the text, it literalizes the way
that the reader brings theory to the text: In a medium where "the
text is a stage and reading is direction" (Douglas, title), the very
agency of the "reader" tends to strip away rather than develop
critical distance. Thus, Espen Aarseth replaces the idea of the
"reader" with the more ambivalent term "user" to describe the person
who interacts with the electronic medium. User, Aarseth writes,
"suggest[s] both active participation and dependency, a figure under
the influence of some kind of pleasure-giving system" (Cybertext
174). Since a user-theorist of e-text operating under the print
assumptions connecting theory to authoritative truth finds in
electric theory the headiness of addiction to confirmation, electric
theory necessitates attention to use itself.
To better define issues of user dependency and control in the
convergence of theory with the electronic medium, I will examine two
very different methods of computerized theoretical study that solve
the problem of literalness: Earl Jackson, Jr.'s semiotics and
psychoanalysis website and Havholm & Stewart's computer-modeled
structural narratology. Both methods arise from the computing
environment and would be impossible without it. Whereas the former
site places the user of electric theory in a webbed environment of
emergent meanings and open-ended exploration, the latter practice
programs theoretical principles in order to radically constrain
theoretical outcomes. Either method's willingness to replace
truth-value with use-value provides a way out of the self-confirming
impasse created by the encounter between First Wave theoretical
assumptions and the computer medium.
French Neo-Structuralist Schools and Industrial Text Analysis
William Glen Winder
Parallel to, and to some degree in reaction to, French
post-structuralist theorization (as championed by Derrida, Foucault,
and Lacan, among others) is a French "neo"-structuralism built
directly on the achievements of structuralism, using electronic
means. We will begin this talk by examining some exemplary
approaches to text analysis in this neo-structuralist vein that have
appeared over the past 10 years.
Some of these approaches have specific "deliverables" and are
promising because of the well-defined focus of their research:
Sator's topoi dictionary, E. Brunet's statistical software, and E.
Brill's grammatical tagger will serve to illustrate projects of this
type. Other research is more theoretical in nature and represents
over-arching models of (electronic) textual study. Two examples we
will consider are Jean-Claude Gardin's expert systems approach and
François Rastier's interpretative semantics.
These practical and theoretical approaches have in common a
fundamental hypothesis: archives of natural language texts are a
valuable and as yet untapped resource for any project to formalise
human understanding, whether that project be industrial or
traditionally humanistic in nature. (Thus, for example, the Brill
tagger uses the Frantext literary database to generate its rule
base.) In a very real and practical sense, authors are
painstaking programmers. They formalise meaning and create, through
their writing, databases of expertise in various domains, which
range from how we use language to how we perceive the world and
exchange information about it. That expertise is precisely what
computers must acquire if they are to perform the more advanced
tasks increasingly asked of them, whether in the context of
humanities research or in an industrial setting.
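To make the hypothesis concrete, a deliberately simplified sketch of how a tagger's rule base can be drawn from a tagged archive follows. The corpus, tags, and correction rule are invented for illustration only; Brill's tagger learns its transformations automatically by measuring error reduction over a training corpus (Frantext, in the project mentioned above), and its rule base is far larger.

# A minimal, hypothetical sketch of Brill-style tagging drawn from a tagged
# archive; the corpus, tags, and rule below are invented for illustration.
from collections import Counter, defaultdict

# Toy tagged "archive": word/tag pairs standing in for a real corpus.
tagged_corpus = [
    ("the", "DET"), ("can", "AUX"), ("wait", "VERB"),
    ("it", "PRON"), ("can", "AUX"), ("rust", "VERB"),
    ("the", "DET"), ("can", "NOUN"), ("is", "AUX"), ("full", "ADJ"),
]

# Step 1: give each word its most frequent tag in the archive.
counts = defaultdict(Counter)
for word, tag in tagged_corpus:
    counts[word][tag] += 1
lexical_tag = {w: c.most_common(1)[0][0] for w, c in counts.items()}

# Step 2: one contextual transformation of the kind the method learns by
# measuring which rewrite most reduces tagging errors on the corpus.
def apply_rule(tags):
    out = list(tags)
    for i in range(1, len(out)):
        # Illustrative rule: retag AUX as NOUN when it follows a determiner.
        if out[i] == "AUX" and out[i - 1] == "DET":
            out[i] = "NOUN"
    return out

sentence = ["the", "can", "can", "wait"]
initial = [lexical_tag.get(w, "NOUN") for w in sentence]
print(apply_rule(initial))  # ['DET', 'NOUN', 'AUX', 'VERB']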
Textual archives which combine texts and expertise are destined to
play an important role in our increasingly electronic society
because programmers face an information barrier. Advanced
programming projects require that programmers describe real-world
objects with exponentially increasing detail and precision. Such
massive requirements for description cannot be met by the efforts of
any single group of programmers: it may well be that only the mass
of textual material, accumulated over the centuries in literary
texts and scientific writing, has enough descriptive weight to allow
programs to break the information barrier and perform qualitatively
more advanced tasks.
Textual research itself faces the same kind of information barrier.
In this paper, we will consider how this "Wissenschaft" accumulation
of expertise is related to and complements the neo-structuralist
approach. Ultimately, electronic critical studies will be defined by
their strategic position at the intersection of the two technologies
shaping our society: the new information processing technology of
computers and the representational techniques that have accumulated
for centuries in written texts. Understanding how these two
information management paradigms complement each other is a key
issue for the humanities and for computer science, and it is vital
to industry, even beyond the narrow domain of the language
industries. It will be the contention of this paper that the
direction of critical studies, a small planet long orbiting only in
rarefied academic circles, will be radically altered by the sheer
size of the economic stakes implied by industrial text analysis.
A Theory for Literature (Created for the World Wide Web, E-Mail, Chat Spaces, Databases, and Other Electronic Technologies)
Dene M. Grigar
Beginning with Michael Joyce's seminal hypertext, 'afternoon, a
story', and moving to the recent webtext by Kathleen Yancey and
Michael Spooner, 'Not (Necessarily) a Cosmic Convergence', Dene
Grigar provides examples of the new literary writing generated by
electronic non-print technologies, such as the World Wide Web, MOOs,
databases, and other types of computer-generated media, and
discusses the theories that have emerged to explain them.
Specifically, she looks at theories of hypertext, posited by Jay
David Bolter, George Landow, and Johndan Johnson-Eilola; of synchronous
or real-time writing, found in MOOs and MUDs, developed by Cynthia
Haynes and Jan Rune Holmevik, Mick Doherty, and Sandye Thompson; and
of online writing and webtexts, articulated by John Barber, Victor
Vitanza, and others. Although electronic writing remains at the
early stages of development in this late age of print (Bolter 2) and
early age of electronic writing (Barber and Grigar 12), it is fast
becoming an important medium of literary expression. By bringing
these examples and ideas together, the author suggests some
guidelines for understanding and discussing electronic writing that
will serve as the starting point for the development of an
overarching literary theory for these emergent literary texts.
Computer-Mediated Discourse, Reception Theory, and Versioning
Susan Schreibman
This paper will address how computer-mediated discourse provides new
opportunities and challenges in two areas of literary criticism,
Reception Theory and Versioning. Although extremely different
critical modes, they can be viewed as belonging to opposite ends of
the space-time continuum, with Versioning taking advantage of the
computer's ability to enhance our understanding of literature
through space, and Reception Theory through time.
Versioning is a relatively new development in the area of textual
criticism. Since the end of the Second World War, the basic principle
under which most textual critics operated was to provide readers
with a text that most closely mirrored authorial intention. This
philosophy of editing produced texts which, by and large, never
existed in the author's lifetime. They were eclectic texts: the
editor, armed with his intimate knowledge of the author and the
text, assumed the role of author-surrogate to create a text which
mirrored final authorial intention. To do this the editor swept away
corruption which had entered the text through the publication
process by well-meaning editors, compositors, wives, heirs, etc. He
also swept away any ambiguity left by the author herself. Thus, in
the case of narrative, by choosing a bit here from the copy text, a
bit there from the first English edition, a sentence there from the
second American edition, and a few lines from the original
manuscript, the editor could restore that elusive but canonical
authorial intention. In the case of poetry, editors were forced to
choose one published version of a text over another, and
substantively ignored authorial ambiguity, such as that of Emily
Dickinson, who left alternative readings of certain words in many of
her poems.
By the mid-1980s, the monolithic approach to textual editing began to
lose favour with a new generation of textual critics, such as Jerome
McGann and Peter Shillingsburg, who, responding to new critical
discourses, including Reception Theory, viewed the text as a
product, not of corruption, but of social interaction among any
number of agents: author, editor, publisher, compositor, scribe,
translator. It was also recognised that
authorial intention was often a fluid state; particularly in the
case of poetry it was possible to have several "definitive" versions
of any one work which represented the wishes of the poet at a
particular point in time.
One reason, I would argue, that newer theories of textual criticism
took so long to be developed was that until the advent of HTML,
the World Wide Web, and the spatial freedom of the Internet, textual
critics had no suitable medium to display a fluid concept of
authorship. Any attempt to demonstrate anything but a monolithic
text which represented final authorial intention was doomed to
failure. As early as 1968 William H. Gillman et al. undertook what
was to be a definitive edition of Emerson's Journals and Notebooks in six volumes. Lewis Mumford,
writing in the New York Review of Books, put paid to this editorial
method with his review article entitled
"Emerson Behind Barbed Wire":
“The cost of this scholarly donation is painfully dear,
even if one puts aside the price in dollars of this heavy
make-weight of unreadable print. For the editors have chosen
to satisfy their standard of exactitude in transcription by
a process of ruthless typographic mutilation.”
In 1984 Hans Walter Gabler's Synoptic edition of Ulysses encountered the same resistance from both the
editing community and the Joyceans. These early efforts at
representing the fluidity of authorship were doomed to failure
because of the two-dimensionality of the printed text. 'Reading'
such texts as Gillman's and Gabler's became impossible for all the
arrows, dashes, crosses, single underlining, double underlining,
footnotes, endnotes and asterisks. I would thus argue that the
theoretical stance of presenting nothing but a monolithic text
representing someone's final intention (which was more likely than
not the editor's) was a product as much of the medium as of
concurrent theoretical modes, e.g. New Criticism.
Hypertext has the spatial richness to overcome the limitations of the
book's two-dimensionality and to present not only works in progress
but also the richness and ambiguity of authorial intention. No longer
do editors have to choose between the three Marianne Moore poems
entitled "Poetry"; all three can be accommodated within the
hypertextual archive. Furthermore, all of Moore's revisions can also
be displayed. Depending on the skill, expertise, and needs of the
user, a single monolithic "Poetry" might be displayed (possibly for a
secondary school class), two or even all three versions might be
viewed (for a freshman poetry course), or all three versions and
several of the manuscript drafts might be shown (for a graduate-level
course on research methods). By utilising a markup language such as
SGML, all these texts could be encoded to create an "Ur" version of
the poem in which lines across versions can be displayed and
compared.
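As a purely illustrative sketch (the element names, witness labels, and line text below are invented, not drawn from an actual Moore archive or any particular SGML schema), the following fragment shows how readings from several versions of a single line might sit in one apparatus, and how a display routine could extract whichever version, or all of them, a given audience needs.

# A hypothetical encoding of one line with readings from three versions;
# element names, witnesses, and text are placeholders, not a real schema.
import xml.etree.ElementTree as ET

ENCODED_LINE = """
<line n="1">
  <app>
    <rdg wit="versionA">first form of the opening line</rdg>
    <rdg wit="versionB">second, revised form of the opening line</rdg>
    <rdg wit="versionC">third, much shortened form of the line</rdg>
  </app>
</line>
"""

def render(xml_text, witness):
    """Return the line as it reads in the chosen version (witness)."""
    line = ET.fromstring(xml_text)
    return " ".join(
        rdg.text
        for app in line.iter("app")
        for rdg in app.iter("rdg")
        if rdg.get("wit") == witness
    )

# A single monolithic text for one audience...
print(render(ENCODED_LINE, "versionB"))
# ...or every version, side by side, for another.
for wit in ("versionA", "versionB", "versionC"):
    print(wit, ":", render(ENCODED_LINE, wit))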
No longer do space and the cost of publishing images in traditional
formats have to guide editions. Facsimile versions which were once
produced only for the most canonical of authors can now be produced
and "published" at extremely low cost. Furthermore, languages such
as SGML or XML facilitate the linking of image and text files,
annotation, notes, links to other relevant texts, and so on. As with
my previous example, all or some of this apparatus can be turned on
depending on the needs of the audience.
The needs of the audience bring me to my second theoretical mode
that can find richer expression in digitisation, Reception Theory.
Reception Theory seeks to provide present-day readers with a
snapshot of a text's history across time, and of how previous
generations of reader response have shaped our conception
of the text. It also seeks to make available to present-day
audiences the historical, psychological, social and/or semantic
codes of the text as it was received at some point in the past. If
the meaning of a work is created in the interaction between text and
reader, hypermedia has the potential to create a three-legged stool
in which present-day readers overhear the dialogue created between
past readers and the text.
As with my previous example, no longer does the cost associated with
the production of printed material have to be the prime
consideration in constructing a reception theory text. As it is now
economically feasible to produce facsimile archives for un-canonical
authors, reception theory archives can be constructed across time
providing the reader access to objects that would have been
unthinkable only a generation ago. For example, take the
construction of a reception theory archive of W. B. Yeats's poetry.
No longer does the author of such a study have to content herself with
providing one or two black-and-white reproductions of the first
Cuala Press editions of Yeats's early poetry to demonstrate the
semantic codes embedded in those early editions. In a digital
edition, it is possible to include full-colour images of these texts,
in addition to the Macmillan editions, which, as standard trade
publications, are stripped of those codes. Furthermore, the early
reception of these poems was no doubt influenced by other objects of
the Arts and Crafts movement in Ireland and England. These objects
(prints, paintings, even wallpaper) could be digitised to create a
lexia of non-textual meaning. As with my previous example on
Versioning, it is possible to conceive a digital archive that could
serve many audiences, with features turned on or off as
necessary.
Hypertext can provide a vehicle for accessing the ways in which
readers realised the aesthetic interpretation of the text if those
readers left a record of that experience; that trail can be textual:
critical (reviews, articles, critical texts), personal (letters,
diaries), creative (the re-writing of a creative work). It can also
involve other media: a painting or ballet based on a poem, myth, or
folktale. These acts of interpretation can be presented to
present-day users in much the same format as we are used to in a
two-dimensional article, i.e., a block of text with hyperlinks
(rather than footnotes) to relevant primary sources.
On the other hand, we have not yet realised a form of critical
discourse which does not mirror the two-dimensional spaces we have
used for the last 500 years to express ideas. Hypertext criticism in
future will, no doubt, embody a new form for critical discourse
which is shaped by the new medium. Criticism which takes advantage
of the three-dimensional space of the computer is so new that the
various paradigms of editorial/authorial intervention have not been
fully realised, much less understood, by creators and users alike. A
case in point is that we still do not have appropriate language to
describe these new objects of computer-mediated critical discourse:
terms like "article", "book", "collection of essays", "review" etc.
have meaning appropriate to the printed word, but possibly not to
the digitised one.
And indeed there are costs associated with this new medium which will
govern, to a large extent, the scope and content of archives:
copyright costs, the production of machine-readable versions of
texts, the cost of digitising images, hardware, software, etc. One's
ideal archive is governed by a balancing of costs, and tradeoffs,
such as deeper encoding versus encoding of a greater number of texts,
will play a significant part in the creation of resources.
In addition, the searching, retrieval and display of objects in a
digital archive will, by default, reflect the bias of both the
editor/author and the system designer. Yet, unlike critical theory
presented in a two-dimensional space, much of the bias will be
invisible to the user, as it will be buried, for example, in SGML or
XML encoding. Thus there will be new challenges in learning to
"read" this new model of literary discourse, which will, in turn, no
doubt, foster new theoretical modes.
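A final, deliberately simplified sketch of that invisibility (the markup and its type="preferred" attribute are hypothetical, not any project's actual encoding): a naive display routine honours an editorial preference recorded in the encoding, and the demoted reading, like the fact that a choice was made at all, never reaches the reader.

# A hypothetical encoding in which an editorial choice is recorded in an
# attribute the end user of the rendered text will never see.
import xml.etree.ElementTree as ET

ENCODED_LINE = """
<line n="1">
  <app>
    <rdg type="preferred">reading promoted by the encoder</rdg>
    <rdg>reading silently demoted by the encoder</rdg>
  </app>
</line>
"""

def naive_display(xml_text):
    """Show only the reading the encoder marked as preferred."""
    for rdg in ET.fromstring(xml_text).iter("rdg"):
        if rdg.get("type") == "preferred":
            return rdg.text
    return None

# The user sees one clean line; the demoted reading stays buried in the markup.
print(naive_display(ENCODED_LINE))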