Cognitive Stylistics and the Literary Imagination
Cognitive Stylistics analyzes an author's idiolect, that author's individual language traits. Although cognitive psychology and neuroscience do not know how the human mind works, they have detected, through experiments on how people behave (not through personal testimony about that behavior), features of mental behavior that are consistent with a standard theory or model. That cautious experimentation uses recall and recognition tests, and EEG (electroencephalography), PET (positron emission tomography), fMRI (functional magnetic resonance imaging), and other scans. With them, scientists are painstakingly uncovering the steps, and the constraints on taking those steps, that characterize how we mentally create and utter sentences (no single English word covers both oral and written expression, but perhaps "uttering" will do). The current standard model describes two language processes: an unself-conscious creative process, veiled and almost unknowable, named by older writers the Muse (Jacoby 1971); and a conscious analytical procedure by which all authors assemble and revise sentences mentally.
These sciences are enhancing our understanding of how authors create both oral and written texts. New knowledge about language processing in the brain helps us interpret data from traditional computer text analysis, not because the mind necessarily works algorithmically, like a computer program, but because quantitative word studies reveal auditory networks, and the cognitive model asserts that the brain operates as a massively distributed group of such networks. Cognitive Stylistics underscores how every utterance is stamped with signs of its originator, and with the date of its inception: these traits are not unique, like fingerprints, but, taken together, they amount to sufficiently distinctive configurations to be useful in authorship attribution or text analysis. Cognitive Stylistics determines what those traces may be, using concordances and frequency lists of repeated phenomena and collocations: together, partially, these conceivably map long-term associational memories in the author's mind at the time the author uttered the text. Such clusters of repetitions arise from built-in cognitive constraints on, and opportunities in, how we generate language. The length of fixed phrases, and the complexity of clusters of those collocations, are important quantitative traits of individuals.
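The frequency lists and collocation counts mentioned here can be sketched in a few lines of Python. The sample sentence, the tokenization, and the choice of adjacent word-pairs (bigrams) as the unit of collocation are illustrative assumptions, not the chapter's own method:

```python
from collections import Counter
import re

def frequency_and_bigrams(text):
    """Build a word-frequency list and a count of adjacent-word
    collocations (bigrams) -- the raw quantitative data from which
    repeated phrases and collocational clusters are identified."""
    words = re.findall(r"[a-z']+", text.lower())
    freq = Counter(words)
    bigrams = Counter(zip(words, words[1:]))
    return freq, bigrams

sample = ("to be or not to be that is the question "
          "whether tis nobler in the mind to suffer")
freq, bigrams = frequency_and_bigrams(sample)
print(freq.most_common(3))      # the most frequent words
print(bigrams[("to", "be")])    # how often the collocation "to be" recurs
```

A concordancer adds, for each such repeated item, the surrounding context in which it occurs; the counting step above is the same.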
Experimental cognitive psychology restores the human author to texts and endows computer-based stylistics with enhanced explanatory power. The potentially distinguishable authorship traits to which it draws attention include the size of the author's personal working memory (Just and Carpenter 1992). The cognitive model shows that, far from being dead, traces of the author do remain in the work, just as Shakespeare vowed they would in his sonnets, long after his own personal death. Collocational clusters also change over time as an individual's memory does. As the implications of cognitive research become understood, text-analysis systems may change, and with them stylistics. Texts in languages whose spelling departs widely from its sound will likely be routinely transcribed into a phonological alphabet, before processing, as researchers recognize the primacy of the auditory in mental language processing. Because the standard cognitive model breaks down the old literary distinction between what is said and how it is said, interpretation and stylistics will come together.
A paradox underlies stylistic research. Most of us do not know much about how we make and utter text, simply because we are so expert at uttering. The more we make texts, the less we have time to be self-conscious about doing so. Committing ourselves completely to the action as it unfolds, we no longer attend to how it takes place.
Composing, for anyone whose style is likely to be the subject of analysis, resembles walking, cycling or even, sometimes, driving a car on a highway. We can almost entirely devote our minds to other things and yet execute these actions properly. Kosslyn and Koenig say the more we know something, the harder it is to declare how we do it: "when one becomes an expert in any domain, one often cannot report how one performs the task. Much, if not most, of the information in memory cannot be directly accessed and communicated" (1992: 373). Recently one of my students, a professional actor, described an unnerving experience she had midway through a theatrical run of Henrik Ibsen's Enemy of the People. As she stood, waiting to go onstage, she could not remember anything of what she was to say or do when performing her part. Yet once she walked onstage, she executed everything perfectly just in time. She then understood why many actors repeatedly experience dread and sickness just before going on. So ingrained has the experience of performing become, over many weeks, that they can no longer summon the words or even the actions to conscious memory. Ironically, even if these actors were to succumb to amnesia, they still would not lose their roles. These have just become inaccessible and so are in effect protected against loss. Amnesiacs may forget who they are, but they can all the same speak and write (Squire 1987: 161, 171; Shimamura et al. 1992). The more we do something, the less we can attend to how we do it. This well-observed cognitive effect shows how frightening, and even disabling, neurological processes developed to ensure reliable performance of an essential skill can be in practice.
Technology partially conceals this actual neglect from anyone who composes directly onto an artificial memory device. Equipped with a pen, a typewriter, or digital editing tools, authors see their text unfolding from their minds as they manually encode it in alphanumeric symbols on screen or paper. Making sentences visibly explicit as composed, writers no longer worry about having to store mentally what they create. Externalized, the words, phrases, and sentences of utterances can be easily deleted, rearranged, transformed grammatically, and replaced. We fully attend to the analytic task of doing these things. Because we externalize subvocal or inner speech immediately in visual form, we feel totally conscious of the mental activity of composing. Yet all we experience is the storage and the manipulation of symbols manifested outside the mind. What happens within the mind remains dark, especially to the expert composer, although novices learning how to use a language may assemble, with painful slowness, utterances in memory, consciously, before they utter them. The inexpert attend completely to the task. They may even be able to describe what steps they take consciously, probing their memory, and editing the results mentally in advance of speaking them. Any native speaker can follow this method in preparing to utter a sentence in their own tongue. It is so slow in natural conversation and composition as to be seldom worth using.
Expert authors in many languages often mention this neglect of attention to how they compose. Although they do not theorize their inability to see how they create sentences, their cumulative testimony is convincing. Being unable to see themselves create text, they characterize the process itself as beyond comprehension. Jean Cocteau explains: "I feel myself inhabited by a force or being – very little known to me. It gives the orders; I follow" (Plimpton 1989: 106). Elizabeth Hardwick agrees: "I'm not sure I understand the process of writing. There is, I'm sure, something strange about imaginative concentration. The brain slowly begins to function in a different way, to make mysterious connections" (Plimpton 1989: 113). Fay Weldon candidly admits: "Since I am writing largely out of my own unconscious much of the time I barely know at the beginning what I am going to say" (Winter 1978: 42). Cynthia Ozick imagines language coming out of disembodied nothingness.
I find when I write I am disembodied. I have no being. Sometimes I'm entranced in the sense of being in a trance, a condition that speaks, I think, for other writers as well. Sometimes I discover that I'm actually clawing the air looking for a handhold. The clawing can be for an idea, for a word. It can be reaching for the use of language, it can be reaching for the solution to something that's happening on the page, wresting out of nothingness what will happen next. But it's all disembodied…. I fear falling short. I probably also fear entering that other world; the struggle on the threshold of that disembodied state is pure terror. (Wachtel 1993: 16)
John Hersey compares composing to "'something like dreaming' … I don't know how to draw the line between the conscious management of what you're doing and this state" (Plimpton 1988: 126). Jonathan Raban thinks of himself as taking down spoken text, as if in dictation, uttered by an "it" – the pronoun used by Cocteau – rather than emerging from his own consciousness.
All writers are in some sense secretaries to their own books, which emerge by a process of dictation. You start the thing off and on the first few pages you're in control, but if the book has any real life of its own, it begins to take control, it begins to demand certain things of you, which you may or may not live up to, and it imposes shapes and patterns on you; it calls forth the quality of experience it needs. Or that's what you hope happens. I don't just sit, making conscious decisions externally about how much of my experience I am going to use. (Wachtel 1993: 120)
Earlier writers name the voice dictating the text as the Muse, a word for memory. Gore Vidal describes this unknown as sounding words aloud in the mind and goes further than others in admitting how ignorant he is of where this voice and its language come from.
I never know what is coming next. The phrase that sounds in the head changes when it appears on the page. Then I start probing it with a pen, finding new meanings. Sometimes I burst out laughing at what is happening as I twist and turn sentences. Strange business, all in all. One never gets to the end of it. That's why I go on, I suppose. To see what the next sentences I write will be. (Plimpton 1989: 63)
Amy Lowell uses the word "voice" and says, like Ozick, that it comes from something disembodied, from no body.
I do not hear a voice, but I do hear words pronounced, only the pronouncing is toneless. The words seem to be pronounced in my head, but with nobody speaking them. (Ghiselin 1952: 110)
These very different authors all tried to watch how sentences came from their minds and gave up. They had to attribute their composition to a disembodied voice or dream that came out of nothing. In editing on the page or screen, they consciously delete and rearrange words within passages, adjust syntactic structure, re-sequence parts, and make word-substitutions, just as we all do. They do not locate the tedious and conscious sentence-building in short-term memory as the origin of genuine composition. Literary theorists agree. Colin Martindale characterizes "primary-process cognition" as "free-associative … autistic … the thought of dreams and reveries", unlike the mind's problem-solving capability, "secondary-process cognition" (1990: 56). Mark Turner states that "All but the smallest fraction of our thought and our labor in performing acts of language and literature is unconscious and automatic" (1991: 39).
Experiments in cognitive psychology and neuroscience have supported the concept of a disembodied mental voice that "utters" sentences without exhibiting how they were put together.
All verbal utterances, written or oral, are received by the brain, and uttered by it, in an auditory-encoded form. Philip Lieberman explains that, during language processing, we access "words from the brain's dictionary through their sound pattern" (2000: 6, 62). Further, we internally model whatever we expect to hear separately from what we are hearing. That is, we can only understand heard speech by modeling silently the articulatory actions necessary to produce it. Lieberman explains that
a listener uses a special process, a "speech mode", to perceive speech. The incoming speech signal is hypothetically interpreted by neurally modeling the sequence of articulatory gestures that produces the best match against the incoming signal. The internal "articulatory" representation is the linguistic construct. In other words, we perceive speech by subvocally modeling speech, without producing any overt articulatory movements. (2000: 48)
The "McGurk" effect shows that our internal model for the utterance we are decoding, for various reasons, may differ significantly from what comes to us as auditory speech.
The effect is apparent when a subject views a motion picture or video of the face of a person saying the sound [ga] while listening to the sound [ba] synchronized to start when the lips of the speaker depicted open. The sound that the listener "hears" is neither [ba] nor [ga]. The conflicting visually-conveyed labial place-of-articulation cue and the auditory velar place of articulation cue yield the percept of the intermediate alveolar [da]. The tape-recorded stimulus is immediately heard as a [ba] when the subject doesn't look at the visual display. (McGurk and MacDonald 1976, cited by Lieberman 2000: 57)
If sound alone were responsible for what was heard, the McGurk subject would hear [ba] at all times. Because it attends to visual clues as well, the mind hears something never sounded, [da]. Only if the subject's mind manufactured speech sounds internally, drawing on all sensory evidence available, could its model differ from both auditory and visual clues individually.
Authors who describe their inner mental activity when creating and uttering sentences are indeed correct when they characterize the "voice" they hear as bodiless. Even when we listen to the speech of others, it is not the speech of those others that we hear. It is the brain's own construct of those voices. Obviously, different brains might well perceive very different sounds, and thus words, from the same speech sounds heard from others. The mind becomes a reader of things made by a mental process that manifests itself as a bodiless, at times strange, voice.
Cognitive sciences confirm authors' impressionistic descriptions of the language-making process as blank and inaccessible. The inner voice utters sentences that appear to come out of nowhere. That we hear a voice suggests we are listening to someone else, not ourselves but someone nameless, unrecognizable, and above all as distant from analysis as the mind of someone whose words we hear on a radio. We invoke such images for a form of memory of how to do something, creating utterances from natural language, where we lack the means to identify the remembering process with ourselves. That process exemplifies the expert mind failing to attend to what it is doing. During composition, we cannot correct this neglect, as we can when driving a car and suddenly wake up to a recognition that we have been driving on automatic, unattended, for some miles. We cannot will consciousness of how our minds create utterances. Making them relies on what is termed procedural or implicit memory, in which what is recalled (that is, how to utter something) is remembered only in the act of doing it. When we try to recollect something stored implicitly, we execute the stored procedure. The mind has never created a readable "manual" of the steps whereby it creates sentences. The only exceptions are those halting, deliberate activities in our short-term memory in which, as if on paper, we assemble an utterance, but of course this method too, in the end, relies on the same mysterious voice or Muse, what cognitive sciences associate with implicit memory, to get it going. That truism, "How can we know what we are going to utter until we have uttered it?", characterizes the uncertainty of waiting for an utterance to be set down on the page or screen, to be spoken aloud, and to enter working memory. In all these situations, a silent inner voice precipitates the text out of nowhere.
Twenty-five years ago, Louis Milic distinguished between what writers do unconsciously in generating language (their stylistic options) and what they do consciously in "scanning, that is, evaluation of what has been generated" (their rhetorical options; 1971: 85). Milic anticipated the distinction between implicit or procedural and explicit memory by several years (Squire 1987: 160). By insisting on the primary role of empirical, rather than of theoretical or impressionistic, evidence in the study of authorship, Milic directed the discipline of stylistics to the cognitive sciences.
Why can we not recall how we make a sentence? Why is the mind blocked by implicit memory from understanding one of the most critical defining features of a human being? The answer seems to lie in what we make memories of. Our long-term memory maker, located in the hippocampus, can store language, images, sounds, sensations, ideas, and feelings, but not neural procedures. Biologically, we appear to have no use for recalling, explicitly, activities by the language-processing centers themselves. Our minds, as they develop, have no given names for the actors and the events at such centers. Such knowledge is not forbidden. It is likely that it is unnecessary to, and possibly counterproductive for, our survival.
The Cognitive Model
So far, the cognitive sciences have been shown to make two critical contributions to the analysis of style: style must be analyzed as auditory, and it emerges from neural procedures to which we cannot attend. Much else can be learned, however, by reading the scientific literature, both general studies (Kosslyn and Koenig 1992; Eysenck and Keane 1990; and Lieberman 2000), and analyses of specific areas like memory (Baddeley 1990; Squire 1987). Recent scientific results, on which these books are based, appear as articles in journals such as Brain, Brain and Language, Cognitive Psychology, The Journal of Neuroscience, The Journal of Verbal Learning and Verbal Behavior, Memory and Cognition, Nature, Neuropsychologia, Psychological Review, and Science. These papers are accessible, but to work with them the intelligent reader must be grounded in the cognitive model of language processing. Because that model is changing now, and will continue to do so, work in cognitive stylistics will need steady reassessment. However, the method of cognitive stylistics, which bases text-stylistics on cognitive effects that experimentation has found to illuminate the mind's style, will remain. Here follows a brief summary of the emerging model. It breaks down into two parts: memory systems, and neural processes.
Scientists now recognize three basic kinds of human memory: (1) short-term memory, now known as working memory; and long-term associative memory, which falls into two kinds, (2) implicit or inaccessible, and (3) explicit or accessible. Implicit long-term memory includes recall of a procedure in the action, and priming. Explicit long-term memory includes episodic memory and semantic memory.
Working memory offers short-term storage of a limited amount of language so that it can be consciously worked on. This form of memory cannot be separated from processing activities. Twenty years ago Alan Baddeley proposed a very influential model of working memory split into three parts: a central executive and two subsystems, a visual area and a phonological or articulatory loop. The executive, which manages tasks in the two subsystems, has been localized in the dorsolateral prefrontal cortex (Lieberman 2000: 77). All conscious mental work on language gets done in the articulatory loop. It encompasses many regions in the brain, including the well-known language centers, Wernicke's and Broca's areas, which handle, respectively, semantic and syntactic processing. (Damage to Wernicke's area leads an individual to utter well-formed nonsense, "word salad"; damage to Broca's area produces a form of agrammatism, characterized by sentence fragments, understandable but unformed.)
Central to the mind's conscious fashioning of language is the subsystem Baddeley calls the articulatory loop. So called because we must recirculate or rehearse a piece of language in order to keep working on it, this loop has a limited capacity, identified by Miller in 1956 as "seven, plus or minus two." Experiments have for decades confirmed these limits and show that individuals do not always reach the maximum potential capacity. The so-called reading-span test asks individuals to remember the final words of a sequence of unrelated sentences. Test results show a range from 2 to 5.5 final words (Just and Carpenter 1992). As early as 1975 experiments showed that we could store in working memory, for recall, only as many words as we could utter aloud in two seconds (Baddeley et al. 1975). The number of such words declined as the total number of syllables increased, in what was termed "the word-length effect." Other experiments elicit so-called "effects" in subjects that confirm the auditory nature of working memory of language, and its severe capacity limits. The "acoustic similarity effect" shows that the ability of an individual to recollect a sequence of unrelated words suffers if they sound alike: semantic relations, or lack of them, and dissimilarity in sound have no effect. If working memory used the images of words, as text, the acoustic character of a word would not affect manipulation in working memory. The "articulatory suppression" effect also testifies to the auditory nature of language as consciously worked in memory. Individuals having to repeat aloud, continuously, a single sound or term or number (say, a function word such as "with") cannot rehearse, subvocally, utterances and so put or keep them in working memory. Auditory language immediately, unpreventably, enters it. Other experiments reveal that syntactically challenging sentences, such as those with clauses embedded within them centrally, reduce language capacity in working memory.
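The word-length effect lends itself to a toy model: working memory holds roughly the words one can articulate in two seconds, so a list of long words fills the loop sooner than a list of short ones. The articulation rate and the syllable counts below are illustrative assumptions, not experimental values from Baddeley et al.:

```python
def recallable_words(words, syllables_per_second=4.0, window=2.0):
    """Toy model of the word-length effect: count how many words
    fit in a two-second articulatory rehearsal window.  Longer
    words consume the syllable budget faster, so fewer are held."""
    budget = syllables_per_second * window  # syllables utterable in the window
    held, used = [], 0
    for word, syllable_count in words:
        if used + syllable_count > budget:
            break
        held.append(word)
        used += syllable_count
    return held

short = [("sum", 1), ("pay", 1), ("bar", 1), ("cold", 1), ("night", 1)]
long_ = [("university", 5), ("opportunity", 5), ("association", 5)]
print(len(recallable_words(short)))  # all five monosyllables fit the window
print(len(recallable_words(long_)))  # only one five-syllable word fits
```

The point of the sketch is only the shape of the constraint: span is fixed by articulation time, not by a fixed number of words.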
Philip Lieberman has recently asserted that we maintain "words in verbal working memory by means of a rehearsal mechanism (silent speech) in which words are internally modeled by the neural mechanisms that regulate the production of speech or manual signs" (2000: 6).
What can be learned about the mind's style from the articulatory loop? Its capacity constraints hamstring conscious mental work on making continuous sentences. It is little wonder that we seldom mentally assemble or attend to editing what we are going to say before we utter it. We have perhaps not had artificial storage devices (e.g., paper, computers), where it is very easy to edit texts, long enough noticeably to atrophy our already limited working memory. However, we have supplemented that memory, for language manipulation, with external storage devices. Increasingly, texts will depart from the mind's constraints as we assemble sentences that greatly exceed the length and complexity of ones that can be attended to by the unassisted mind. This extension has two clear effects. First, it produces utterances that the human mind cannot consciously assimilate into working memory for analysis. This, like the McGurk effect, will cause the mind to work around the problem and perhaps, in doing so, to remodel the ingested utterance in ways that distort it. Second, the very experience of total control over utterances that artificial storage devices give makes all the more unbearable our mental blindness to the generation of utterances. No one can use software to create sentences, outside of playful programs like Joseph Weizenbaum's Eliza and the Postmodernism Generator at http://www.elsewhere.com. Authors affirm the existence of the puzzling, to some frighteningly blank, inner voice that early writers called the Muse.
No matter whether we utter sentences as oral speech, or write them onto paper or into a file, we use one of four methods. First, we can spontaneously compose and utter without conscious thought or foresight, as during free conversation, when our auditory voice overlays the subvocal inner muse, or during rapid typing or writing, when our hands scarcely keep up with the dictation from within. Second, we can recite, rehearse by rote, from long-term explicit memory something that we laid down there. The term "explicit" is a little misleading because our long-term memory is never apparent, as if it were a landscape, but resembles a black ocean in which we cast lines. In long-term memory is knowledge of the world and facts, including information (so-called semantic memory), and personal experience (so-called episodic memory; Tulving 1983). If we catch something, it suddenly appears in working memory, uttered by a subvocal voice if what we retrieve is language, and then we can recite (respeak) that voice aloud. Third, we can script sentences in working memory and utter them deliberately from there. This making process draws not only on long-term memory but consciously on cognitive powers like emotion and reason. Last, we can join our inner subvocal voice in working memory to our eyes and to external artificial memory devices, such as paper and computer displays, in order to compose, apparently "outside ourselves", although relying on cognitive resources of which we are aware.
What we call long-term or associative memory is still not well understood. How memories are stored in it and retrieved from it, what is stored there, and why, emotionally, we embed material in memory and withdraw it at all: these questions characterize what authors term the Muse. We find something in long-term memory by using working memory to think of words and things associated with what we are trying to recall. It is widely accepted that this activates links in our mental network so that the desired information, sometimes on the "tip-of-one's-tongue", pops out. We often link individual things not logically but fortuitously, according to how we have encountered them in experience, rather than intentionally to meet a need. A common phrase for this linking effect is "spreading activation" (Collins and Loftus 1975). To stimulate one memory appears to have a rippling effect on all memories linked to it. The strength of that activation, or its "weight", may be proportional to the number of times that the linkage between those two memories has previously been activated.
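Spreading activation is easy to caricature in code: stimulate one node, and activation ripples along weighted links, attenuated with each step. The network, its weights, and the decay factor below are all invented for illustration; this is a toy of the Collins and Loftus idea, not a model of any real memory:

```python
def spread_activation(network, seeds, decay=0.5, steps=2):
    """Toy spreading-activation model: activation flows from each
    stimulated node to its neighbours, scaled by link weight and a
    per-step decay, so closer and more strongly linked memories
    receive more activation."""
    activation = dict(seeds)
    for _ in range(steps):
        nxt = dict(activation)
        for node, level in activation.items():
            for neighbour, weight in network.get(node, []):
                nxt[neighbour] = nxt.get(neighbour, 0.0) + level * weight * decay
        activation = nxt
    return activation

# Hypothetical associative links with invented strengths ("weights").
network = {
    "muse":   [("memory", 0.9), ("voice", 0.6)],
    "memory": [("childhood", 0.7)],
    "voice":  [("radio", 0.4)],
}
result = spread_activation(network, {"muse": 1.0})
print(sorted(result, key=result.get, reverse=True))
```

The directly stimulated node stays strongest; two links away, activation has already faded, which is one way to picture why remote associations surface only faintly, "on the tip of one's tongue."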
Long-term associative memory does not store, in one place, complete utterances. Our mind's procedural memory of how to create or recreate an utterance calls on many different parts of the brain concurrently, and they operate until the very instant of utterance or subvocal voicing. The mind's thesaurus (concepts), lexicon (words), and encyclopedia (images of things) consist of "morphologically decomposed representations" (Koenig et al. 1992). Different systems responsible for phonemes, lexis, part-of-speech, syntax, letter-shapes, etc., all stored in different locations, work in parallel. Current research, for example, locates color words in the ventral temporal lobe, action words in the left temporal gyrus, names of people in the temporal pole, and words for animals and tools in, respectively, the anterior and posterior inferotemporal area (Martin et al. 1995: 102; Lieberman 2000: 63, 65; Ojemann 1991). The concept of a typical noun resembles an address list that itemizes the locations of separately stored traits or features. Words associated with concepts are kept separately and matched together by a mediating area of the brain (Damasio and Damasio 1992) termed a "convergence zone." The "combinatorial arrangements that build features into entities, and entities into events, i.e. their spatial and temporal coincidences, are recorded in separate neural ensembles, called convergence zones … [found in] association cortexes, limbic cortexes, and nonlimbic subcortical nuclei such as the basal ganglia… [where they form] hierarchical and heterarchical networks" (Damasio et al. 1990: 105). Convergence zones are keys to neural networks.
One type of long-term associative memory is priming. It is a wildcard in the formation of our memory store. Sensory experience lays down primes in the mind; we are never aware of and we do not attend to them. Kosslyn and Koenig describe how researchers read word-pairs to patients who had been anaesthetized for surgery. Later, when these patients were asked for the second, associated member of each word-pair, they replied with the formerly primed words more than with equally likely associated words (1992: 376). This effect is often termed repetition priming. A "prior exposure to a stimulus facilitates later processing of that stimulus" (1992: 374). Primes create sometimes unexpected links between an experience or an idea that we regard as common, and other things that would not ordinarily be associated with it. Even if everyone shared the same fuzzy definition of a simple concept, individual experiences, impacting on us with the force of primes, would subtly alter that already fuzzy definition. When we search long-term memory, we are intentionally, consciously, launching a prime-like probe. This type of prime always places semantic restrictions on retrieval. For instance, priming with the word "present" in the hope of raising memories related to the meaning "gift" will not elicit anything related to the meaning "now." When "primes are unattended, words related to either meaning appear to be facilitated" (Posner and Raichle 1997: 148–51). That is, when someone or some experience triggers long-term memory, what surfaces in equally unattended shape has all strings attached.
How does the mind use these associative networks to provide an utterance? That process remains elusive. Kosslyn and Koenig say that the brain forces output of an utterance automatically "via a process of constraint satisfaction" (1992: 48, 268–69) in which what might be termed the best fit survives. This fit is to some pragmatic goal that meets the person's needs, however they may be said to exist. Emotions, desires, and purposes inform those needs. If cognition activates many brain sites in parallel, and if our vaguely sensed intentions determine what associative networks are selected to supply the semantic gist of what we will say, it is little wonder that we cannot describe how the Muse works. Working memory – the only mental place where we can consciously attend to language – is not big enough to hold this complex cascade of mental events and is also inherently unsuited to doing so. Mental processes are not images or sounds.
So-called experiments "in nature" (that is, patients with brain damage) and devices that image brain activity have at least identified some essential components of this sentence-making process. In the classical model of language brain function, Lichtheim (1885) and Geschwind (1970) proposed that two regions of the neocortex were responsible: the posterior Wernicke's area did semantic processing and sent language data to the frontal Broca's area, which clothed it with syntactic form and passed it on to the motor cortex for speaking. This model relied on ample medical evidence that patients with damage in Wernicke's area displayed faulty or nonsensical semantics and comprehension, and that those with damage in Broca's area revealed staccato, fragmented speech with agrammatism. No one disputes this evidence from brain damage, but localizing function so simply is now impossible. Neural activity during linguistic processing turns out to be massively parallel and distributed. Language does not follow one path but many. Also, after damage to Broca's and Wernicke's areas, the brain can enlist "alternate neuroanatomical structures" for language use (Lieberman 2000: 5) and recover functionality. Lieberman and his colleagues have also recently shown that subcortical basal ganglia structures, in one of the most evolutionarily ancient (reptilian) parts of the brain, help regulate language processing. As far as the brain is concerned, the watchword is indeed in the plural, location, location, location.
The Mind's Style
Louis Milic argued some decades ago that stylistics must abandon impressionism for quantitative measures. Since then, researchers who have compiled numeric data about style have been puzzled to explain how such measures illuminate literary works or the authors who made them. Cognitive stylistics asserts that literary texts do not have style; individual minds do, in uttering. It contends that individual styles are profoundly affected by the neural constraints surrounding mental language processes. Because minds can only be analyzed indirectly, stylistics as a discipline must do research at the interface of the cognitive sciences and corpus linguistics. Cognitive psychology and neuroscience tell us what to expect. Corpus linguistics extracts quantitative features of texts that can be analyzed in terms of how well they match what the human sciences predict will be found.
How, then, do these sciences characterize language uttered by the disembodied sub-vocal voice long named the Muse? Keeping in mind that scientists base their models of how the mind works on experimentally discovered effects, and that cognitive stylistics is at an early stage, a clear profile of the mind's style is beginning to emerge. It is:
• auditory. Language utterances as stored, processed, and retrieved are phonological, not sequences of visual symbols, not alphabetic.
• lexico-syntactic. Grammar and vocabulary cannot be separated: that is, syntactic structure imposed by Broca's area, and semantic fields by Wernicke's area, simultaneously participate in a unified, parallel, non-sequential process.
• combinatory. The building blocks of any utterance are networks, what Damasio's convergence zones wire together. These blocks are not discrete words. The mind knows word-image-concept-sound combinations, not dictionary headwords and their explanations.
• built from two-second-long units. These combinations appear in repeated phrases or unfixed collocations that are not more than 5–9 units in length. This follows if, as scientists suspect, working memory constraints are associated with a deeper limitation existing at the level of neural networks. Indeed, one use of computer text analysis is to help determine the size and complexity of long-term-memory networks, how many things can converge on a convergence zone.
• biased to parataxis. Working memory is slowed when it must manipulate centrally embedded clauses. The mind, working towards a best fit in generating a sentence, may also well opt for simpler syntactical structures, unnested syntactic constructions, that is, paratactic sentences that take the form of a list of clauses linked by conjunctions.
• semantically indeterminate. No conventional thesaurus, encyclopedia, or dictionary can adequately document the individual mind's convergence zones, what may underlie semantic fields and associative clusters, simply because every individual's long-term store is affected by fortuitous associations, primes. The traits associated with concepts and words alter subtly as the weights that measure their binding strength change over time, affected by everything we directly experience through the senses. This partly explains the language effect known as lexical indeterminacy (Pilch 1988). Many words cannot be defined precisely enough to avoid misunderstandings. Individuals use words differently and only partially overlap with others.
• time-sensitive. As memory changes (or deteriorates), so do the characteristic traits of its utterances. Style is tied always to the health and age of the author's brain.
Other features of the mind's default style may be known and certainly will be discovered. These inbuilt traits are enough to initiate research.
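These default traits lend themselves to simple computational checks. The sketch below (the normalized Middle English snippet and the nine-unit ceiling are illustrative assumptions, not findings) collects every fixed phrase that repeats in a token stream and asks how long the longest repetend is:

```python
from collections import Counter

def repeating_ngrams(tokens, max_n=9):
    """Collect every fixed phrase of 2..max_n tokens that occurs
    more than once in the token stream."""
    found = {}
    for n in range(2, max_n + 1):
        counts = Counter(tuple(tokens[i:i + n])
                         for i in range(len(tokens) - n + 1))
        found.update({gram: f for gram, f in counts.items() if f > 1})
    return found

# Illustrative snippet only (normalized spelling, invented continuation).
text = ("whan that aprill with his shoures soote the droghte of march "
        "hath perced to the roote whan that aprill bathed every veyne").split()
reps = repeating_ngrams(text)
longest = max(len(gram) for gram in reps)   # "whan that aprill": 3 units
```

On real texts, the cognitive model predicts that `longest` should rarely exceed the 5–9 units working memory can rehearse.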
Text-analysis algorithms and tools are available today to undertake rudimentary computer-based cognitive stylistics research.
Analysis is complicated by the need for enriched texts. We cannot use orthographically spelled text alone because the brain uniformly recognizes, stores, retrieves, and operates in working memory on language as auditory data, "silent speech." Each word must be available in orthographic and phonemic forms, at least, and optimally be segmented into syllables and tagged with morphological information. Resources such as the Speech Assessment Methods Phonetic Alphabet (SAMPA, <http://www.phon.ucl.ac.uk/home/sampa/home.htm>) offer rigorous modern British and American alphabets. SIL International (<http://www.sil.org>) has useful databases and software for this enrichment. Some automatic phonetic conversion tools recommended by the Simplified Spelling Society (<http://www.spellingsociety.org>) use less rigorous alphabets. Phonetic conversion in general can only be done well by researchers who understand the sounds of a language. In early periods, before printing, spelling may represent word-sounds adequately for analysis.
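The enrichment described here amounts to pairing each orthographic word with a phonemic record. The tiny SAMPA-style lexicon below is hand-made for illustration only; a real project would draw on SIL's databases or a full pronunciation dictionary and syllabifier:

```python
# Hand-made SAMPA-style lexicon, illustrative only; a real study would
# use a full pronunciation dictionary, syllabifier, and morphology tagger.
LEXICON = {
    "true":  {"sampa": "tru:",  "syllables": ["tru:"]},
    "love":  {"sampa": "lVv",   "syllables": ["lVv"]},
    "prove": {"sampa": "pru:v", "syllables": ["pru:v"]},
}

def enrich(tokens):
    """Pair each orthographic token with its phonemic form;
    None marks a word the lexicon does not cover."""
    return [(t, LEXICON[t]["sampa"] if t in LEXICON else None)
            for t in tokens]

pairs = enrich(["true", "love", "vnknowen"])
# [('true', 'tru:'), ('love', 'lVv'), ('vnknowen', None)]
```

Only the phonemic layer is shown; the syllable and morphological layers would travel in the same record.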
Researchers have been able to locate repeated words and fixed phrases since the invention of the KWIC concordancer in the late 1950s. Software from computational linguistics exists to generate the repeating fixed phrases of texts, termed "n-grams" (Fletcher 2002). Techniques for generating unfixed repeating phrases, that is, collocations, have been slower to develop. Collgen, a free TACT program I developed in the early 1990s, works only with word-pairs, word-word collocations, in small corpora. Xtract (1993), by Smadja and McKeown, is unavailable, but Word Sketch by Kilgarriff and Tugwell looks promising. Every researcher faces the not inconsiderable task of defining collocation itself and selecting a statistical measure that ranks repeating collocations by their significance. Collins Wordbanks Online (<titania.cobuild.Collins.co.uk/wbinfo.php3>) offers interactive searching for word-combinations extracted from a 56-million-word Bank of English with two significance scores: "mutual information" and the t-test. Berry-Rogghe (1973), Choueka et al. (1983), and Church and Hanks (1990) offer various measures. Budanitsky and Hirst (2001) evaluate them. Significance rating conceivably recovers information about the strength of associativity among items stored in long-term memory. By processing concordance data, representations of such clusters can be built.
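The two significance scores can be computed from raw frequencies alone. The sketch below states them in their simplest pointwise form, on invented corpus counts; published measures refine these with window sizes and smoothing:

```python
import math

def mutual_information(joint, f_node, f_collocate, corpus_size):
    """Pointwise mutual information: how far observed co-occurrence
    exceeds what chance predicts (in bits)."""
    expected = f_node * f_collocate / corpus_size
    return math.log2(joint / expected)

def t_score(joint, f_node, f_collocate, corpus_size):
    """The t-test approximation often used to rank collocations."""
    expected = f_node * f_collocate / corpus_size
    return (joint - expected) / math.sqrt(joint)

# Invented counts: a 10,000-token corpus in which the node occurs
# 50 times, the collocate 40 times, and the pair 8 times.
mi = mutual_information(8, 50, 40, 10000)   # log2(8 / 0.2), about 5.32 bits
t = t_score(8, 50, 40, 10000)               # (8 - 0.2) / sqrt(8), about 2.76
```

Mutual information favors rare but exclusive pairings; the t-score favors frequent ones, which is why corpora such as Wordbanks report both.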
I am not aware of any software that generates repeating clusters of words, fixed phrases, and unfixed phrases (collocations), say, to a limit of seven units, plus or minus two, but computational algorithms written for large industrial applications in data mining might permit repurposing.
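One possible repurposing, sketched here under the assumption that each non-overlapping text window can be treated as a data-mining "transaction," is an Apriori-like pass that keeps the unordered word groups recurring across windows:

```python
from collections import Counter
from itertools import combinations

def repeating_clusters(tokens, window=9, size=3, min_freq=2):
    """Apriori-like toy: unordered groups of `size` distinct words that
    co-occur inside at least `min_freq` windows. Windows do not overlap,
    so one occurrence of a group is counted only once."""
    counts = Counter()
    for i in range(0, len(tokens), window):
        win = set(tokens[i:i + window])
        counts.update(combinations(sorted(win), size))
    return {group for group, f in counts.items() if f >= min_freq}

# Toy token stream in which "soth", "to", "seyn" recur together.
tokens = "to seyn soth x y z q w soth to seyn a".split()
clusters = repeating_clusters(tokens, window=4, size=3)
```

Industrial frequent-itemset miners do essentially this at scale, with candidate pruning; window size, group size, and frequency threshold would all need theoretical justification.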
Some trial literary analyses tested whether traits in works by two English poets, Geoffrey Chaucer and William Shakespeare, could be accounted for within the standard cognitive model of language processing. My studies used texts with normalized orthography, untranscribed phonetically because Middle and Early Modern English pronunciation is not yet sufficiently well understood. The software, Collgen, was limited to repeated fixed phrases and node-collocate pairs. Unfixed groups of three or more collocates went uncollected. These limitations aside, published results of the analyses tended to affirm that the styles of both authors might be cognitively based, and partly recoverable.
I studied two passages from Shakespeare's works, Hamlet III.1 (the so-called "nunnery scene") and Troilus and Cressida 1.3.1–29 (Agamemnon's first speech), and two parts of Chaucer's Canterbury Tales, the General Prologue and the Manciple's prologue and tale, both in the context of the complete Canterbury Tales. The principal repeated vocabulary unit of both authors was the word-combination. In The Canterbury Tales, Chaucer used 12,000 word-forms but 22,000 repeating fixed phrases. Over two periods, 1589–94 and 1597–1604, Shakespeare's different fixed phrases at least doubled his word types. The vocabulary of both poets consisted, not of single words, but of little networks, a fact consistent with associative long-term memory. The sizes of these networks were well within what working memory could accommodate. The 464 phrasal repetends appearing in both Chaucer's General Prologue and the rest of his Canterbury Tales averaged 2.45 words. They fell into 177 networks. Repeating fixed phrases in Shakespeare's texts in both periods averaged 2.5 words. Chaucer's largest repeating combination in the General Prologue (853 lines) had nine words. Shakespeare's largest in Hamlet III.1, under 200 lines long, had five words. A second Shakespeare analysis, of Agamemnon's speech in Troilus and Cressida 1.3.1–29, found 107 phrasal repetends (repeating elsewhere in Shakespeare's works) in a passage that has only 159 different word-forms. Most combinations are two words in length, and the maximum has four. It is possible that the constraints of working memory affected the quantitative profile of the verbal networks employed by both men.
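The underlying comparison, extracting the phrases of a passage that recur in the rest of a work and averaging their lengths, can be sketched as follows (the snippets are invented, not quotations from the studies):

```python
def fixed_phrases(tokens, max_n=9):
    """All contiguous phrases of 2..max_n tokens."""
    return {tuple(tokens[i:i + n])
            for n in range(2, max_n + 1)
            for i in range(len(tokens) - n + 1)}

def shared_repetends(passage, rest):
    """Phrases of the passage that recur anywhere in the rest of the
    work, with their mean length in words."""
    shared = fixed_phrases(passage) & fixed_phrases(rest)
    mean_len = sum(map(len, shared)) / len(shared) if shared else 0.0
    return shared, mean_len

# Invented snippets with normalized spelling.
passage = "the droghte of march".split()
rest = "hath the droghte of march perced".split()
shared, mean_len = shared_repetends(passage, rest)
```

Even this toy case yields a mean repetend length between two and three words, the range the Chaucer and Shakespeare studies report.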
For both authors, I used text-graphs of these phrasal repetends to depict how they intersected. In the Chaucer study, three overlapping "say"-"tell"-"speak" graphs drew attention to Chaucer's unnoticed marking of the three verbs: they distinguished "between speaking (words), telling tales, and saying truths or sooths" (Lancashire 1992a: 349). Gary Shawver's doctoral thesis at the University of Toronto, "A Chaucerian Narratology: 'Storie' and 'Tale' in Chaucer's Narrative Practice" (1999), developed a finely detailed theory of Chaucer's narrative by taking this passing observation seriously. Two intersecting phrasal repetend graphs on the various word-forms for "true" and "love" in Hamlet also revealed a small network including the term "prove."
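Such text-graphs can be approximated in software. The sketch below adopts one plausible formalization, joining two phrasal repetends into the same network whenever they share a word-form, and runs on invented Middle English repetends:

```python
from collections import defaultdict

def repetend_networks(repetends):
    """Group phrasal repetends into networks: two repetends join the
    same network when they share a word-form (union-find)."""
    parent = {r: r for r in repetends}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x

    by_word = defaultdict(list)
    for r in repetends:
        for w in r:
            by_word[w].append(r)
    for group in by_word.values():
        for other in group[1:]:
            parent[find(other)] = find(group[0])

    nets = defaultdict(set)
    for r in repetends:
        nets[find(r)].add(r)
    return list(nets.values())

# Invented repetends: the first, second, and fourth chain together
# through shared word-forms; "telle a tale" stands alone.
reps = [("speke", "of"), ("to", "speke"),
        ("telle", "a", "tale"), ("sooth", "to", "seyn")]
nets = repetend_networks(reps)
```

Whether shared word-forms are the right joining criterion, rather than shared collocates or shared lemmas, is exactly the kind of question such graphs raise.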
Other findings illustrate the variety of useful applications of cognitive stylistics. Chaucer's phrasal repetends in the General Prologue that repeated somewhere in the rest of the tales were graphed against those tales. After their different sizes were taken into account, the distribution showed that the General Prologue shared an unexpectedly high number of repetends with a quite unrelated tale, the Manciple's, which always appears just before the last tale, the Parson's. One possible interpretation of these results is that Chaucer wrote the two works in the same year. The 107 phrasal repetends in Agamemnon's speech in Shakespeare's Troilus and Cressida served a different purpose. They explained why Shakespeare used an odd sequence of images in lines that critics thought ill-conceived. Passages from earlier poems and plays here documented associative linkages that are private.
Annotating texts for their repeating combinations conceivably finds traces of an author's long-term associative memory that make up his idiolect. Combinatorial analysis also assists in close reading. It puts into sharp relief the unrepeating words in those passages that may mark recent mental acquisitions. (Very long phrasal repetends enter the author's text manually, copied from other sources.) Analysis exposes some repetitions, notably syntactic strings, that may show that an author primes his mind by looking at the unfolding composition on page or screen, thereby moving the language into an artificial memory. The image of just-composed text can lead to re-use of its grammatical structures, that is, its function words. Writing, entering working memory as an image and then converted subvocally for the articulatory loop, stimulates variations on itself. Yet cognitive stylistics is a young field. It must undertake hundreds of case studies to buttress these hypotheses. There are three special challenges.
We still have no firm model of repeating word-combinations. Fixed-phrase repetends (n-grams) and collocations (order-free collocate groups) are known to vary according to the span or window, measured in words, within which they are collected. The tighter the window (e.g., five words), the shorter the repeating repetends. If a window becomes as large as a page, it can contain two or more instances of the same fixed phrase. In that case, should the repetend be redefined as a cluster of that repeating phrase? We do not know how large such repeating clusters are allowed to get. If we increase the size of the window to the complete text, we then have only one complex cluster that turns out not to be a repetend after all because it never repeats. The window itself must be theorized. It could be set at the length of working memory, but mental associational networks may be larger. If eight words on either side of one member of a collocate pair were set as the span (i.e., the maximum capacity of the articulatory loop in working memory), what would we find if, after collecting all repeating collocates for a node word, we then treated those collocates themselves as nodes and went on to collect their collocates, and so on? At what point does the original node no longer reasonably associate with a distant collocate of one of its node-collocates?
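The collocate-of-collocate question can at least be posed mechanically. The sketch below expands outward from a node word within an assumed span, generation by generation, on a toy token list; where the expansion should stop is precisely what remains untheorized:

```python
def collocates(tokens, node, span=8):
    """Word-forms occurring within `span` tokens of any instance of the node."""
    found = set()
    for i, t in enumerate(tokens):
        if t == node:
            found |= set(tokens[max(0, i - span):i])
            found |= set(tokens[i + 1:i + 1 + span])
    return found

def expand(tokens, node, depth, span=8):
    """Treat each collocate as a new node in turn, out to `depth`
    generations, returning every word reached from the original node."""
    frontier, seen = {node}, {node}
    for _ in range(depth):
        step = set()
        for n in frontier:
            step |= collocates(tokens, n, span)
        frontier = step - seen
        seen |= frontier
    return seen - {node}

# Toy stream: with span=1, "a" reaches "b" in one generation, "c" in two.
reach = expand("a b c d e".split(), "a", depth=2, span=1)
```

Each added generation weakens the claim that the reached word still "associates" with the original node; a principled cutoff would need evidence from memory research.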
We now have a reasonable vocabulary for describing the repeating fixed phrase and the cluster of a node and its collocates. Word-combinations that have reached their greatest phrasal length or collocate number are termed "maximals" (Altenberg and Eeg-Olofsson 1990: 8). Shorter, more frequent substrings or sub-collocate groups, which appear to function as kernels or attractors for other units, are called "subordinates"; and substrings of those subordinates, substrings that do not occur more frequently than the subordinate in which they appear, are termed "fragments" (Lancashire 1992b). Kjellmer (1991: 112) uses the phrase "right-and-left predictive" for nodes that accumulate collocates to the right or the left. Sinclair (1991: 121) characterizes the frequency behavior of a node as "upward" if its collocates occur more frequently than it does, and "downward" if less frequently. (For example, nouns collocate upward with function words, and function words collocate downward with nouns.) Repetend features such as these affect our sense of the strength of association – often termed semantic distance – between parts of a fixed phrase or node-collocate cluster. So far, we calculate strength on the basis of frequencies (expected and actual) and mutual distance (measured in words), but size and other traits must be taken into account. For example, consider two node-collocate pairs, both separated by five words, and both sharing the same frequency profiles. (That is, both pairs share an actual frequency of co-occurrence that exceeds the expected frequency by the same amount.) Do they have different strengths of association if one of the pairs consists of two single words, and the other of two four-word fixed phrases? Or consider a repetend consisting of a single open-class word (e.g., noun, adjective, non-auxiliary verb, etc.), grammatically marked by a function word (e.g., article, preposition, etc.).
Can we reasonably compare the strength of association there with that governing two open-class words that collocate freely and at some distance from one another? (In other words, can we treat grammatical and lexical collocations identically?) And how should we locate, name, and characterize words that have no significant repeating collocates and partner in no repeated phrases? These are words with a null "constructional tendency" (Kjellmer 1994: ix). The more we know about combinatory repetends, the more we can bring to understanding the mind's style and maybe even our long-term memory. Texts follow from an individual's brain functions, but the experimental sciences do not know how to use texts as evidence. Cognitive stylistics can assist.
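The classificatory vocabulary above can be operationalized roughly, as in this sketch over invented phrase frequencies (a real implementation would also handle collocate groups, not just fixed phrases, and the Hamlet counts are made up for illustration):

```python
def classify(phrase_freqs):
    """Rough operationalization of the terms maximal, subordinate, and
    fragment: a maximal has no longer containing phrase; a subordinate
    occurs more often than any phrase containing it; a fragment does not."""
    def contains(q, p):   # p occurs as a contiguous substring of q
        return len(p) < len(q) and any(
            q[i:i + len(p)] == p for i in range(len(q) - len(p) + 1))

    labels = {}
    for p, f in phrase_freqs.items():
        containers = [q for q in phrase_freqs if contains(q, p)]
        if not containers:
            labels[p] = "maximal"
        elif f > max(phrase_freqs[q] for q in containers):
            labels[p] = "subordinate"
        else:
            labels[p] = "fragment"
    return labels

# Invented frequencies: "to be" recurs beyond "to be or"; "be or" does not.
freqs = {("to", "be"): 5, ("to", "be", "or"): 2, ("be", "or"): 2}
labels = classify(freqs)
```

The point of such a classifier is less the labels themselves than forcing the definitions to be explicit enough to test.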
The most common application of stylistics is authorship attribution. It locates marker-traits in an unattributed work that match selected traits of only one of the candidates. This method systematically neglects the overall stylistic profile of any one author. We need many more text-based studies of how a single author uses repeating word-combinations over time. Humanities researchers must also go beyond texts for their evidence. We can enhance our knowledge about composition, and its relationship to brain function, by undertaking experimental studies common in psychology and neuroscience. If the humanities expect the sciences to attend to text-based analysis, they must begin to undertake controlled experiments to analyze the linguistic behavior of living writers as they compose. Interviews and tests before, during, and after periods of composition can be combined with detailed capture of the author's keystrokes as the creative process takes place. The necessary research tools for these experiments are now widely applied to human-user interaction research.
In cognitive stylistics, the humanities can take a leading role in profoundly important research straddling the medical sciences, corpus and computational linguistics, literary studies, and a community of living authors.
References for Further Reading
Altenberg, B. and M. Eeg-Olofsson (1990). Phraseology in Spoken English: Presentation of a Project. In J. Aarts and W. Meijs (eds.), Theory and Practice in Corpus Linguistics (pp. 1–26). Amsterdam: Rodopi.
Baddeley, A. (1986). Working Memory. Oxford: Clarendon Press.
Baddeley, A. (1990). Human Memory: Theory and Practice. Hove and London: Erlbaum.
Baddeley, A. (1992). Is Working Memory Working? The Fifteenth Bartlett Lecture. Quarterly Journal of Experimental Psychology 44A, 1: 1–31.
Baddeley, A. (1993). Working Memory. Science 255: 556–9.
Baddeley, A. (1998). Recent Developments in Working Memory. Current Opinion in Neurobiology 8, 2(April): 234–8.
Baddeley, A., S. E. Gathercole, and C. Papagno (1998). The Phonological Loop as a Language Learning Device. Psychological Review 105, 1: 158–73.
Baddeley, A., et al. (1975). Word Length and the Structure of Short-term Memory. Journal of Verbal Learning and Verbal Behavior 14, 6: 575–89.
Berry-Rogghe, G. L. M. (1973). The Computation of Collocations and Their Relevance in Lexical Studies. In A. J. Aitken, R. W. Bailey, and N. Hamilton-Smith (eds.), The Computer and Literary Studies (pp. 103–12). Edinburgh: Edinburgh University Press.
Budanitsky, Alexander and Graeme Hirst (2001). Semantic Distance in WordNet: An Experimental, Application-oriented Evaluation of Five Measures. Workshop on WordNet and Other Lexical Resources, Second meeting of the North American Chapter of the Association for Computational Linguistics, Pittsburgh.
Caplan, D. and G. S. Waters (1990). Short-term Memory and Language Comprehension: A Critical Review of the Neuropsychological Literature. In G. Vallar and T. Shallice (eds.), Neuropsychological Impairments of Short-term Memory. Cambridge: Cambridge University Press.
Chang, T. M. (1986). Semantic Memory: Facts and Models. Psychological Bulletin 99, 2: 199–220.
Choueka, Y., T. Klein, and E. Neuwitz (1983). Automatic Retrieval of Frequent Idiomatic and Collocational Expressions in a Large Corpus. Journal for Literary and Linguistic Computing 4: 34–8.
Church, Kenneth Ward and Patrick Hanks (1990). Word Association Norms, Mutual Information, and Lexicography. Computational Linguistics 16, 1: 22–9.
Collins, A. M. and E. F. Loftus (1975). A Spreading Activation Theory of Semantic Processing. Psychological Review 82: 407–28.
Courtney, S. M., L. Petit, M. M. Jose, L. G. Ungerleider, and J. V. Haxby (1998). An Area Specialized for Spatial Working Memory in Human Frontal Cortex. Science 279: 1347–51.
Damasio, Antonio R. (1995). Descartes' Error: Emotion, Reason, and the Human Brain. New York: Avon.
Damasio, Antonio R. and Hanna Damasio (1992). Brain and Language. Scientific American 267, 3: 88–95.
Damasio, Antonio R., Daniel Tranel, and Hanna Damasio (1990). Face Agnosia and the Neural Substrates of Memory. Annual Review of Neuroscience 13: 89–109.
D'Esposito, M., J. A. Detre, D. C. Alsop, R. K. Shin, S. Atlas, and M. Grossman (1995). The Neural Basis of the Central Executive System of Working Memory. Nature 378: 279–81.
Eysenck, Michael W. and Mark T. Keane (1990). Cognitive Psychology: A Student's Handbook. Hove, East Sussex: Erlbaum.
Fletcher, William H. (2002). kfNgram Information & Help. At http://www.chesapeake.net/~fletcher/kfNgramHelp.html.
Gathercole, S. E. and A. D. Baddeley (1993). Working Memory and Language. Hillsdale, NJ: Lawrence Erlbaum.
Geschwind, Norman (1970). The Organization of Language and the Brain. Science 170: 940–4.
Geschwind, Norman (1979). Specializations of the Human Brain. In The Brain (pp. 108–17). A Scientific American Book. San Francisco: W. H. Freeman.
Ghiselin, Brewster, (ed.) (1952). The Creative Process: A Symposium. New York: Mentor.
Grasby, P. M., C. D. Frith, K. J. Friston, C. Bench, R. S. J. Frackowiak, and R. J. Dolan (1993). Functional Mapping of Brain Areas Implicated in Auditory–Verbal Memory Function. Brain 116: 1–20.
Hagoort, Peter (1993). Impairments of Lexical–Semantic Processing in Aphasia: Evidence from the Processing of Lexical Ambiguities. Brain and Language 45: 189–232.
Hoey, M. (1991). Patterns of Lexis in Text. Oxford: Oxford University Press.
Jacoby, Mario (1971). The Muse as a Symbol of Literary Creativity. In Joseph P. Strelka (ed.), Anagogic Qualities of Literature (pp. 36–50). University Park and London: Pennsylvania State University Press.
Just, Marcel A. and Patricia A. Carpenter (1992). A Capacity Theory of Comprehension: Individual Differences in Working Memory. Psychological Review 99: 122–49.
Kertesz, Andrew (1983). Localization of Lesions in Wernicke's Aphasia. In Andrew Kertesz (ed.), Localization in Neuropsychology (pp. 208–30). New York: Academic Press.
Kilgarriff, Adam and David Tugwell (2001). WORD SKETCH: Extraction and Display of Significant Collocations for Lexicography. COLLOCATION: Computational Extraction, Analysis and Exploitation (pp. 32–8). 39th ACL and 10th EACL. Toulouse (July).
Kjellmer, G. (1991). A Mint of Phrases. In K. Aijmer and B. Altenberg (eds.), Corpus Linguistics: Studies in Honour of Jan Svartvik (pp. 111–27). London: Longman.
Kjellmer, Goran (1994). A Dictionary of English Collocations. Oxford: Clarendon Press.
Koenig, Olivier, Corinne Wetzel, and Alfonso Caramazza (1992). Evidence for Different Types of Lexical Representations in the Cerebral Hemispheres. Cognitive Neuropsychology 9, 1: 33–220.
Kosslyn, Stephen M. and Olivier Koenig (1992). Wet Mind: The New Cognitive Neuroscience. New York: Free Press.
Lancashire, Ian (1992a). Chaucer's Repetends from The General Prologue of the Canterbury Tales. In R. A. Taylor, James F. Burke, Patricia J. Eberle, Ian Lancashire, and Brian Merrilees (eds.), The Centre and its Compass: Studies in Medieval Literature in Honor of Professor John Leyerle (pp. 315–65). Kalamazoo, MI: Western Michigan University Press.
Lancashire, Ian (1992b). Phrasal Repetends in Literary Stylistics: Shakespeare's Hamlet III.1. Research in Humanities Computing 4 (pp. 34–68). Selected Papers from the ALLC/ACH Conference, Christ Church, Oxford, April, 1992. Oxford: Clarendon Press, 1996.
Lancashire, Ian (1993a). Chaucer's Phrasal Repetends and The Manciple's Prologue and Tale. In Ian Lancashire (ed.), Computer-Based Chaucer Studies (pp. 99–122). CCHWP 3. Toronto: Centre for Computing in the Humanities.
Lancashire, Ian (1993b). Computer-assisted Critical Analysis: A Case Study of Margaret Atwood's The Handmaid's Tale. In George Landow and Paul Delany (eds.), The Digital Word (pp. 293–318). Cambridge, MA: MIT Press.
Lancashire, Ian (1993c). Uttering and Editing: Computational Text Analysis and Cognitive Studies in Authorship. Texte: Revue de Critique et de Theorie Litteraire 13/14: 173–218.
Lancashire, Ian (1999). Probing Shakespeare's Idiolect in Troilus and Cressida 1.3.1–29. University of Toronto Quarterly 68, 3: 728–67.
Lancashire, Ian, in collaboration with John Bradley, Willard McCarty, Michael Stairs, and T. R. Wooldridge (1996). Using TACT with Electronic Texts: A Guide to Text-Analysis Computing Tools, Version 2.1 for MS-DOS and PC DOS. New York: Modern Language Association of America.
Lavric, A., S. Forstmeier, and G. Rippon (2000). Differences in Working Memory Involvement in Analytical and Creative Tasks: An ERP Study. Neuroreport 11, 8: 1613–18.
Levine, David N. and Eric Sweet (1983). Localization of Lesions in Broca's Motor Aphasia. In Andrew Kertesz (ed.), Localization in Neuropsychology (pp. 185–208). New York: Academic Press.
Lichtheim, L. (1885). On Aphasia. Brain 7: 433–84.
Lieberman, Philip (2000). Human Language and Our Reptilian Brain: The Subcortical Bases of Speech, Syntax, and Thought. Cambridge, MA: Harvard University Press.
Longoni, A. M., J. T. E. Richardson, and A. Aiello (1993). Articulatory Rehearsal and Phonological Storage in Working Memory. Memory and Cognition 21: 11–22.
MacDonald, Maryellen, Marcel A. Just, and Patricia A. Carpenter (1992). Working Memory Constraints on the Processing of Syntactic Ambiguity. Cognitive Psychology 24: 56–98.
Martin, A., J. V. Haxby, F. M. Lalonde, C. L. Wiggs, and L. G. Ungerleider (1995). Discrete Cortical Regions Associated with Knowledge of Color and Knowledge of Action. Science 270: 102–5.
Martindale, Colin (1990). The Clockwork Muse: The Predictability of Artistic Change. New York: Basic Books.
Miles, C., D. M. Jones, and C. A. Madden (1991). Locus of the Irrelevant Speech Effect in Short-term Memory. Journal of Experimental Psychology: Learning, Memory, and Cognition 17: 578–84.
Milic, Louis T. (1971). Rhetorical Choice and Stylistic Option: The Conscious and Unconscious Poles. In Seymour Chatman (ed. and tr.), Literary Style: A Symposium. London and New York: Oxford University Press.
Miller, G. A. (1956). The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information. Psychological Review 63: 81–97.
Miyake, Akira and Priti Shah (1998). Models of Working Memory: Mechanisms of Active Maintenance and Executive Control. New York: Cambridge University Press.
Ojemann, George A. (1991). Cortical Organization of Language. Journal of Neuroscience 11, 8 (August): 2281–7.
Ojemann, George A., F. Ojemann, E. Lattich, and M. Berger (1989). Cortical Language Localization in Left Dominant Hemisphere: an Electrical Stimulation Mapping Investigation in 117 Patients. Journal of Neurosurgery 71: 316–26.
Paulesu, E., C. D. Frith, and R. Frackowiak (1993). The Neural Correlates of the Verbal Component of Working Memory. Nature 362: 342–5.
Perani, D., S. Bressi, S. F. Cappa, G. Vallar, M. Alberoni, F. Grassi, C. Caltagirone, L. Cipolotti, M. Franceschi, G. L. Lenzi, and F. Fazio (1993). Evidence of Multiple Memory Systems in the Human Brain. Brain 116: 903–19.
Petersen, S. E. and J. A. Fiez (1993). The Processing of Single Words Studied with Positron Emission Tomography. Annual Review of Neuroscience 16: 509–30.
Pilch, Herbert (1988). Lexical Indeterminacy. In E. G. Stanley and T. F. Hoad (eds.), Words for Robert Burchfield's Sixty-fifth Anniversary (pp. 133–41). Cambridge: D. S. Brewer.
Plimpton, George, (ed.) (1988). Writers at Work: The Paris Review Interviews. London: Penguin.
Plimpton, George, (ed.) (1989). The Writer's Chapbook: A Compendium of Fact, Opinion, Wit, and Advice from the 20th Century's Preeminent Writers, rev. edn. London: Penguin Books.
Posner, Michael I. and Marcus E. Raichle (1997). Images of Mind (pp. 117–18, 123–9). New York: Scientific American Library.
Schacter, Daniel L., C.-Y. Peter Chiu, and Kevin N. Ochsner (1993). Implicit Memory: A Selective Review. Annual Review of Neuroscience 16: 159–82.
Schweickert, R. and B. Boriff (1986). Short-term Memory Capacity: Magic Number or Magic Spell? Journal of Experimental Psychology: Learning, Memory, and Cognition 12, 3: 419–25.
Scott, Mike and Geoff Thompson, (eds.) (2001). Patterns of Text in Honour of Michael Hoey. Amsterdam: John Benjamins.
Shallice, T. and B. Butterworth (1977). Short-term Memory and Spontaneous Speech. Neuropsychologia 15: 729–35.
Shawver, Gary (1999). A Chaucerian Narratology: "Storie" and "Tale" in Chaucer's Narrative Practice. PhD thesis, University of Toronto. Cf. homepages.nyu.edu/~gs74/nonComIndex.html#4.
Sheridan, Jenny and Glyn W. Humphreys (1993). A Verbal-semantic Category-specific Recognition Impairment. Cognitive Neuropsychology 10, 2: 143–220.
Shimamura, Arthur P., Felicia B. Gershberg, Paul J. Jurica, Jennifer A. Mangels, and Robert T. Knight (1992). Intact Implicit Memory in Patients with Frontal Lobe Lesions. Neuropsychologia 30, 10: 931–220.
Sinclair, John (1991). Corpus Concordance, Collocation. Oxford: Oxford University Press.
Smadja, Frank (1993). Retrieving Collocations from Text: Xtract. Computational Linguistics 19, 1: 143–77.
Squire, Larry R. (1987). Memory and Brain. New York: Oxford University Press.
Tulving, E. (1983). Elements of Episodic Memory. Oxford: Oxford University Press.
Turner, Mark (1991). Reading Minds: The Study of English in the Age of Cognitive Science. Princeton, NJ: Princeton University Press.
Vallar, G., A. M. D. Betta, and M. C. Silveri (1997). The Phonological Short-term Store-rehearsal System. Neuropsychologia 35: 795–812.
Wachtel, Eleanor, (ed.) (1993). Writers and Company. Toronto: Alfred A. Knopf Canada.
Winter, Nina (1978). Interview with the Muse: Remarkable Women Speak on Creativity and Power. Berkeley, CA: Moon.
Zola-Morgan, S. and L. R. Squire (1993). Neuroanatomy of Memory. Annual Review of Neuroscience 16: 547–63.