Knowing …: Modeling in Literary Studies
When the understanding of scientific models and archetypes comes to be regarded as a reputable part of scientific culture, the gap between the sciences and the humanities will have been partly filled. For exercise of the imagination, with all its promises and its dangers, provides a common ground. (Max Black, Models and Metaphors (1962): 242f)
At the beginning of their important book, Understanding Computers and Cognition, Terry Winograd and Fernando Flores declare that, "All new technologies develop within a background of tacit understanding of human nature and human work. The use of technology in turn leads to fundamental changes in what we do, and ultimately what it is to be human. We encounter deep questions of design when we recognize that in designing tools we are designing ways of being" (1987: xi).
Because of computing, Brown and Duguid note in The Social Life of Information, "We are all, to some extent, designers now" (2000: 4). For us no qualification is necessary, especially so because of our commitment to a computing that is of as well as in the humanities. So, Winograd and Flores' question is ours: what ways of being do we have in mind? And since we are scholars, this question is also, perhaps primarily, what ways of knowing do we have in hand? What is the epistemology of our practice?
Two years after the book by Winograd and Flores, two scholarly outsiders were invited to the first joint conference of the Association for Computers and the Humanities (ACH) and Association for Literary and Linguistic Computing (ALLC) to comment on our work as it then was: the archaeological theoretician Jean-Claude Gardin and the literary critic Northrop Frye. Their agreement about the central aspect of computing for the humanities and their implicit divergence over how best to apply it provide the starting point for this chapter and my own response to the epistemological question just raised.
Both Gardin, in "On the ways we think and write in the humanities" (1991), and Frye, in "Literary and mechanical models" (1991), roughly agree on two matters:
1. Quantitative gains in the amount of scholarly data available and accessible are, with certain qualifications, a Good Thing — but not the essential thing. Gardin compares the building of large resources to the great encyclopedia projects, which he regards as intellectually unimportant. (Perhaps this is a blindness of the time: it is now quite clear that the epistemological effects of these large resources are profound, though they clearly do not suit his interests and agenda.) Frye is less dismissive. He notes that the mechanically cumulative or "Wissenschaft" notion of scholarship now has a machine to do it better than we can. But like Gardin his focus is elsewhere, namely —
2. Qualitatively better work, proceeding from a principled basis within each of the disciplines. This is the central point of computing to both Gardin and Frye: work that is disciplined, i.e., distinguishable from the intelligent but unreliable opinion of the educated layperson.
Gardin takes what he calls a "scientific" approach to scholarship, which means reduction of scholarly argument to a Turing-machine calculus, then use of simulation to test the strength of arguments.
Frye's interest is in studying the archetypes or "recurring conventional units" of literature; he directs attention to computer modeling techniques as the way to pursue this study.
Thus both scholars propose computing as a means of working toward a firmer basis for humanities scholarship. Gardin refers to "simulation," Frye to "modeling." Both of these rather ill-defined terms share a common epistemological sense: use of a likeness to gain knowledge of its original. We can see immediately why computing should be spoken of in these terms, as it represents knowledge of things in manipulable form, thus allows us to simulate or model these things. Beyond the commonsense understanding, however, we run into serious problems of definition and so must seek help.
My intention here is to summarize the available help, chiefly from the history and philosophy of the natural sciences, especially physics, where most of the relevant work is to be found. I concentrate on the term "modeling" because that is the predominant term in scientific practice — but it is also, perhaps even by nature, the term around which meanings I find the most useful tend to cluster. I will then give an extended example from humanities computing and discuss the epistemological implications. Finally I will return to Jean-Claude Gardin's very different agenda, what he calls "the logicist programme," and to closely allied questions of knowledge representation in artificial intelligence.
As noted, I turn to the natural sciences for wisdom on modeling because that's where the most useful form of the idea originates. Its usefulness to us derives, I think, from the kinship between the chiefly equipment-oriented practices of the natural sciences and those of humanities computing. Here I summarize briefly an argument I have made elsewhere at length (2005: 20–72).
Here I must be emphatic: I am not suggesting in any way that the humanities emulate these sciences, nor that humanities computing become a science, nor that the humanities will take on scientific qualities through computing. My argument is that we can learn from what philosophers and historians have had to say about experimental practice in the natural sciences because humanities computing is, like them, an equipment-oriented field of enquiry. It thus provides, I will suggest later, access to a common ground of concerns.
The first interesting point to be made is that despite its prevalence and deep familiarity in the natural sciences, a consensus on modeling is difficult to achieve. Indeed, modeling is hard to conceptualize. There is "no model of a model," the Dutch physicist H. J. Groenewold declares (1961: 98), and the American philosopher Peter Achinstein warns us away even from attempting a systematic theory (1968: 203). Historians and philosophers of science, including both of these, have tried their best to anatomize modeling on the basis of the only reliable source of evidence — namely actual scientific practice. But this is precisely what makes the conceptual difficulty significant: modeling grows out of practice, not out of theory, and so is rooted in stubbornly tacit knowledge — i.e., knowledge that is not merely unspoken but may also be unspeakable. The ensuing struggle to understand its variety yields some useful distinctions, as Achinstein says. The fact of the struggle itself points us in turn to larger questions in the epistemology of practice — to which I will return.
The most basic distinction is, in Clifford Geertz's terms, between "an 'of' sense and a 'for' sense" of modeling (1993/1973: 93). A model of something is an exploratory device, a more or less "poor substitute" for the real thing (Groenewold 1961: 98). We build such models-of because the object of study is inaccessible or intractable, like poetry or subatomic whatever-they-are. In contrast a model for something is a design, exemplary ideal, archetype or other guiding preconception. Thus we construct a model of an airplane in order to see how it works; we design a model for an airplane to guide its construction. A crucial point is that both kinds are imagined, the former out of a pre-existing reality, the latter into a world that doesn't yet exist, as a plan for its realization.
In both cases, as Russian sociologist Teodor Shanin has argued, the product is an ontological hybrid: models, that is, bridge subject and object, consciousness and existence, theory and empirical data (1972: 9). They comprise a practical means of playing out the consequences of an idea in the real world. Shanin goes on to argue that models-of allow the researcher to negotiate the gulf between a "limited and selective consciousness" on the one hand and "the unlimited complexity and 'richness' of the object" on the other (1972: 10). This negotiation happens, Shanin notes, "by purposeful simplification and by transformation of the object of study inside consciousness itself." In other words, a model-of is made in a consciously simplifying act of interpretation. Although this kind of model is not necessarily a physical object, the goal of simplification is to make tractable or manipulable what the modeler regards as interesting about it.
A fundamental principle for modeling-of is the exact correspondence between model and object with respect to the webs of relationships among the selected elements in each. Nevertheless, such isomorphism (as it is called) may be violated deliberately in order to study the consequences. In addition to distortions, a model-of may also require "properties of convenience," such as the mechanism by which a model airplane is suspended in a wind-tunnel. Thus a model-of is fictional not only by being a representation, and so not the thing itself, but also by selective omission and perhaps by distortion and inclusion as well.
Taxonomies of modeling differ, as noted. Among philosophers and historians of science there seems rough agreement on a distinction between theoretical and physical kinds, which are expressed in language and material form respectively (Black 1962: 229). Computational modeling, like thought-experiment, falls somewhere between these two, since it uses language but is more or less constrained by characteristics of the medium of its original.
From physical modeling we can usefully borrow the notion of the "study" or "tinker-toy" model — a crude device knowingly applied out of convenience or necessity (Achinstein 1968: 209). Indeed, in the humanities modeling seems as a matter of principle to be crude, "a stone adze in the hands of a cabinetmaker," as Vannevar Bush said (Bush 1967: 92). This is not to argue against progress, which is real enough for technology, but rather to say that deferring the hard questions as if their solutions were inevitably to be realized — a rhetorical move typical in computing circles — is simply irresponsible.
Theoretical modeling, constrained only by language, is apt to slip from a consciously makeshift, heuristic approximation to hypothesized reality. Black notes that in such "existential use of modeling" the researcher works "through and by means of" a model to produce a formulation of the world as it actually is (1962: 228f). In other words, a theoretical model can blur into a theory. But our thinking will be muddled unless we keep "theory" and "model" distinct as concepts. Shanin notes that modeling may be useful, appropriate, stimulating, and significant — but by definition never true (1972: 11). It is, again, a pragmatic strategy for coming to know. How it contrasts with theory depends, however, on your philosophical position. There are two major ones.
To the realist theories are true. As we all know, however, theories are overturned. To the realist, when this happens — when in a moment of what Thomas Kuhn called "extraordinary science" a new theory reveals an old one to be a rough approximation of the truth (as happened to Newtonian mechanics about 100 years ago) — the old theory becomes a model and so continues to be useful.
To the anti-realist, such as the historian of physics Nancy Cartwright, the distinction collapses. As she says elegantly in her "simulacrum account" of physical reality, How the Laws of Physics Lie, "the model is the theory of the phenomenon" (1983: 159). Since we in the humanities are anti-realists with respect to our theories (however committed we may be to them politically), her position is especially useful: it collapses the distinction between theory and theoretical model, leaving us to deal only with varieties of modeling. This, it should be noted, also forges a link, on our terms, between humanities computing and the theorizing activities in the various disciplines.
Since modeling is pragmatic, the worth of a model must be judged by its fruitfulness. The principle of isomorphism means, however, that for a model-of, this fruitfulness is meaningful in proportion to the "goodness of the fit" between model and original, as Black points out (1962: 238). But at the same time, more than a purely instrumental value obtains. A model-of is not constructed directly from its object; rather, as a bridge between theory and empirical data, the model participates in both, as Shanin says. In consequence a good model can be fruitful in two ways: either by fulfilling our expectations, and so strengthening its theoretical basis, or by violating them, and so bringing that basis into question. I argue that from the research perspective of the model, in the context of the humanities, failure to give us what we expect is by far the more important result, however unwelcome surprises may be to granting agencies. This is so because, as the philosopher of history R. G. Collingwood has said, "Science in general … does not consist in collecting what we already know and arranging it in this or that kind of pattern. It consists in fastening upon something we do not know, and trying to discover it. That is why all science begins from the knowledge of our own ignorance: not our ignorance of everything, but our ignorance of some definite thing … " (1993/1946: 9). When a good model fails to give us what we expect, it does precisely this: it points to "our ignorance of some definite thing."

The rhetoric of modeling begins, Achinstein suggests, with analogy — in Dr Johnson's words, "resemblance of things with regard to some circumstances or effects." But as Black has pointed out, the existential tendency in some uses of modeling pushes it from the weak statement of likeness in simile toward the strong assertion of identity in metaphor.
We know that metaphor paradoxically asserts identity by declaring difference: "Joseph is a fruitful bough" was Frye's favorite example. Metaphor is then characteristic not of theory to the realist, for whom the paradox is meaningless, but of the theoretical model to the anti-realist, who in a simulacrum-account of the world will tend to think paradoxically. This is, of course, a slippery slope.
But it is also a difficult thought, so let me try again. Driven as we are by the epistemological imperative, to know; constrained (as in Plato's cave or before St Paul's enigmatic mirror) to know only poor simulacra of an unreachable reality — but aware somehow that they are shadows of something — our faith is that as the shadowing (or call it modeling) gets better, it approaches the metaphorical discourse of the poets.
If we are on the right track, as Max Black says at the end of his essay, "some interesting consequences follow for the relations between the sciences and the humanities," namely their convergence (1962: 242f). I would argue that the humanities really must meet the sciences half-way, on the common ground that lies between, and that we have in humanities computing a means to do so.
Modeling in Humanities Computing: An Example
Let me suggest how this might happen by giving an example of modeling from my own work modeling personification in the Roman poet Ovid's profoundly influential epic, the Metamorphoses. Personification — "the change of things to persons" by rhetorical means, as Dr Johnson said — is central to Ovid's relentless subversion of order. His primary use of this literary trope is not through the fully personified characters, such as Envy (in book 2), Hunger (in book 8) and Rumor (in book 12), but rather through the momentary, often incomplete stirrings toward the human state. These figures are contained within single phrases or a few lines of the poem; they vanish almost as soon as they appear. Many if not most of the approximately 500 instances of these tend to go unnoticed. But however subtle, their effects are profound.
Unfortunately, little of the scholarship written since classical times, including James Paxson's study of 1994, helps at the minute level at which these operate. In 1963, however, the medievalist Morton Bloomfield indicated an empirical way forward by calling for a "grammatical" approach to the problem. He made the simple but profoundly consequential observation that nothing is personified independently of its context, only when something ontologically unusual is predicated of it. Thus Dr Johnson's example, "Confusion heard his voice." A few other critics responded to the call, but by the early 1980s the trail seems to have gone cold. It did so, I think, because taking it seriously turned out to involve a forbidding amount of Sitzfleisch. Not to put too fine a point on it, the right technology was not then conveniently to hand.
It is now, of course: software furnishes a practical means for the scholar to record, assemble, and organize a large number of suitably minute observations, to which all the usual scholarly criteria apply. That much is obvious. But seeing the methodological forest requires us to step back from the practical trees. The question is, what kind of data-model best suits the situation? Unfortunately, the two primary requirements of modeling literature — close contact with the text, and rapid reworking of whatever model one has at a given moment — are not well satisfied by the two obvious kinds of software, namely text-encoding and relational database design. The former keeps one close to the text but falls short on the manipulatory tools; the latter supplies those tools but at the cost of great distance from the text. (Given a choice, the latter is preferable, but clearly the right data-model for this kind of work remains to be designed.) The lesson to be learned here, while we stumble along with ill-suited tools, is that taking problems in the humanities seriously is the way forward toward better tools. Adding features or flexibilities to existing tools is not.
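By way of illustration only, the relational approach might record each minute observation as a row anchored to its place in the poem, so that the same observations can be regrouped and counted at will. The table, columns, and sample rows below are hypothetical assumptions made for the sketch, not a proposal for the right data-model, which, as said above, remains to be designed:

```python
# A minimal relational sketch: recording minute scholarly observations.
# Table name, columns, and the sample rows are hypothetical illustrations.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE observation (
        book     INTEGER,   -- book of the poem
        line     INTEGER,   -- line number, anchoring the record to the text
        entity   TEXT,      -- the thing stirring toward the human state
        factor   TEXT,      -- one linguistic factor observed at this point
        note     TEXT       -- free scholarly commentary
    )
""")
con.executemany(
    "INSERT INTO observation VALUES (?, ?, ?, ?, ?)",
    [
        (2, 760, "Invidia", "predicated-action", "fully personified figure"),
        (1, 61, "Eurus", "human-verb", "momentary stirring only"),
    ],
)

# The manipulatory gain: regroup the same observations by factor.
rows = con.execute(
    "SELECT factor, COUNT(*) FROM observation GROUP BY factor"
).fetchall()
```

The cost the text names is visible even here: the poem itself has disappeared behind the schema, which is why close contact with the text remains the unmet requirement.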
But let us notice where we are with the literary problem. We begin with a theoretical model of personification — let us call it T — suitable to the poem. T assumes a conventional scala naturae, or what Arthur Lovejoy taught us to call "the great chain of being," in small steps or links from inanimate matter to humanity. In response to the demands of the Metamorphoses personification is defined within T as any shift up the chain to or toward the human state. Thus T focuses, unusually, on ontological change per se, not achievement of a recognizable human form. T incorporates the Bloomfield hypothesis but says nothing more specific about how any such shift is marked. With T in mind, then, we build a model by analyzing personifications according to the linguistic factors that affect their poetic ontology.
But what are these factors? On the grammatical level the most obvious, perhaps, is the verb that predicates a human action to a non-human entity, as in Dr Johnson's example. But that is not all, nor is it a simple matter: different kinds of entities have different potential for ontological disturbance (abstract nouns are the most sensitive); verbs are only one way of predicating action, and action only one kind of personifying agent; the degree to which an originally human quality in a word is active or fossilized varies; and finally, the local and global contexts of various kinds affect all of these in problematic ways. It is, one is tempted to say, all a matter of context, but as Jonathan Culler points out, "one cannot oppose text to context, as if context were something other than more text, for context is itself just as complex and in need of interpretation" (1988: 93f).
Heuristic modeling rationalizes the ill-defined notion of context into a set of provisional but exactly specified factors through a recursive cycle. It goes like this. Entity X seems to be personified; we identify factors A, B, and C provisionally; we then encounter entity Y, which seems not to qualify even though it has A, B, and C; we return to X to find previously overlooked factor D; elsewhere entity Z is personified but has only factors B and D, so A and C are provisionally downgraded or set aside; and so on. The process thus gradually converges on a more or less stable phenomenology of personification. This phenomenology is a model according to the classical criteria: it is representational, fictional, tractable, and pragmatic. It is a computational model because the analysis that defines it obeys two fundamental rules: total explicitness and absolute consistency. Thus everything to be modeled must be explicitly represented, and it must be represented in exactly the same way every time.
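The shape of that recursive cycle can be sketched in code. In the sketch below the entities X, Y, and Z, the factors A through D, the scholar's judgments, and the simple frequency-based weighting are all illustrative assumptions, not data from the Metamorphoses; the point is only the convergence, repeating until the weights (the provisional phenomenology) stabilize:

```python
# A sketch of the recursive modeling cycle: factors observed in instances
# judged personified (or not) by the scholar; weights are revised until
# they no longer change. All names and numbers are hypothetical.

instances = [
    # (entity, factors observed, judged personified?)
    ("X", {"A", "B", "C", "D"}, True),
    ("Y", {"A", "B", "C"}, False),  # has A, B, C yet is not personified
    ("Z", {"B", "D"}, True),
]

def refine(instances):
    """Revise factor weights against the scholar's judgments until stable.

    A factor's weight is the fraction of its occurrences that fall in
    personified instances: factors that also appear in non-personified
    instances (like A and C) are downgraded; those confined to
    personified ones (like D) keep full weight.
    """
    factors = set().union(*(obs for _, obs, _ in instances))
    weights = {f: 1.0 for f in factors}
    for _ in range(10):  # the recursion, bounded for the sketch
        new = {}
        for f in factors:
            pos = sum(1 for _, obs, p in instances if p and f in obs)
            neg = sum(1 for _, obs, p in instances if not p and f in obs)
            new[f] = pos / (pos + neg) if (pos + neg) else 0.0
        if new == weights:  # a stable phenomenology has been reached
            break
        weights = new
    return weights
```

Running `refine(instances)` downgrades A and C (each occurring once in the counterexample Y) and leaves D at full weight, which is exactly the provisional setting-aside the cycle above describes; the two rules of total explicitness and absolute consistency are what make such a mechanical re-weighting possible at all.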
The imaginative language of poetry doesn't survive well under such a regime. But this is only what we expect from modeling, during which (as Teodor Shanin said) the "limited and selective consciousness" of the modeler comes up against "the unlimited complexity and 'richness' of the object." In the case of poetry the result can only be a model of the tinker-toy variety, Vannevar Bush's "stone adze in the hands of a cabinetmaker." Nevertheless, with care, scholarly furniture may be made with this adze. In return for "suspension of ontological unbelief," as Black said about models generally (1962: 228), modeling gives us manipulatory power over the data of personification. With such a model we can then engage in the second-order modeling of these data by adjusting the factors and their weightings, producing different results and raising new questions. The model can be exported to other texts, tried out on them in a new round of recursive modeling, with the aim of producing a more inclusive model, or better questions about personification from which a better model may be constructed. This is really the normal course of modeling in the sciences as well: the working model begins to converge on the theoretical model.
At the same time, Edward Sapir famously remarked, "All grammars leak" (1921: 47). The failures of the model — the anomalous cases only special pleading would admit — are the leaks that reflect questioningly back on the theoretical model of the Metamorphoses and so challenge fundamental research. They point, again as Collingwood said, to our ignorance of a particular thing.
This is, I think, what can now come of what Northrop Frye had in mind when he suggested in 1989 that, were he writing the Anatomy of Criticism afresh, he would pay a good deal of attention to computational modeling.
As I have defined and illustrated it, modeling implies the larger environment of experimentation and so raises the question of what this is and what role it might have in the humanities. I have argued elsewhere that our heuristic use of equipment, in modeling, stands to benefit considerably from the intellectual kinship this use has with the experimental sciences (McCarty 2002). Since Paul Feyerabend's attack on the concept of a unitary scientific method in Against Method (1975) and Ian Hacking's foundational philosophy of experiment in Representing and Intervening (1983), two powerful ideas have been ours for the thinking: (1) experiment is an epistemological practice of its own, related to but not dependent on theory; and (2) experiment is not simply heuristic but, in the words the literary critic Jerome McGann borrowed famously from Lisa Samuels, is directed to "imagining what we don't know" (2001: 105ff), i.e., to making new knowledge. This is the face that modeling turns to the future, that which I have called the "model-for," but which must be understood in Hacking's interventionist sense.
In the context of the physical sciences, Hacking argues that we make hypothetical things real by learning how to manipulate them; thus we model-for them existentially in Black's sense. But is this a productive way to think about what happens with computers in the humanities? If it is, then in what sense are the hypothetical things of the humanities realized? — for example, the "span" in corpus linguistics; the authorial patterns John Burrows, Wayne McKenna, and others demonstrate through statistical analysis of the commonest words; Ian Lancashire's repetends; my own evolving phenomenology of personification. Ontologically, where do such entities fit between the reality of the object on the one hand and the fiction of the theoretical model on the other? They are not theoretical models. What are they?
Models-for do not have to be such conscious things. They can be the serendipitous outcome of play or of accident. What counts in these cases, Hacking notes, is not observation but being observant, attentive not simply to anomalies but to the fact that something is fundamentally, significantly anomalous — a bit of a more inclusive reality intruding into business as usual.
Knowledge Representation and the Logicist Program
The conception of modeling I have developed here on the basis of practice in the physical sciences gives us a crude but useful machine with its crude but useful tools and a robust if theoretically unattached epistemology. It assumes a transcendent, imperfectly accessible reality for the artifacts we study, recognizes the central role of tacit knowledge in humanistic ways of knowing them and, while giving us unprecedented means for systematizing these ways, is pragmatically anti-realist about them. Its fruits are manipulatory control of the modeled data and, by reducing the reducible to mechanical form, identification of new horizons for research.
Jean-Claude Gardin's logicist program, argued in his 1989 lecture and widely elsewhere, likewise seeks to reduce the reducible — in his case, through field-related logics to reduce humanistic discourse to a Turing-machine calculus so that the strength of particular arguments may be tested against expert systems that embed these structures. His program aims to explore, as he says, "where the frontier lies between that part of our interpretative constructs which follows the principles of scientific reasoning and another part which ignores or rejects them" (1990: 26). This latter part, which does not fit logicist criteria, is to be relegated to the realm of literature, i.e., dismissed from scholarship, or what he calls "the science of professional hermeneuticians" (1990: 28). It is the opposite of modeling in the sense that it regards conformity to logical formalizations as the goal rather than a useful but intellectually trivial byproduct of research. Whereas modeling treats the ill-fitting residue of formalization as meaningfully problematic and problematizing, logicism exiles it — and that, I think, is the crucial point.
The same point emerges from the subfield of AI known as "knowledge representation," whose products instantiate the expertise at the heart of expert systems. In the words of John Sowa, KR (as it is known in the trade) plays "the role of midwife in bringing knowledge forth and making it explicit," or more precisely, it displays "the implicit knowledge about a subject in a form that programmers can encode in algorithms and data structures" (2000: xi). The claim of KR is very strong, namely to encode all knowledge in logical form. "Perhaps," Sowa remarks in his book on the subject, "there are some kinds of knowledge that cannot be expressed in logic." Perhaps, indeed. "But if such knowledge exists," he continues, "it cannot be represented or manipulated on any digital computer in any other notation" (2000: 12).
He is, of course, correct to the extent that KR comprises a far more rigorous and complete statement of the computational imperatives I mentioned earlier. We must therefore take account of it. But again the problematic and problematizing residue is given short shrift. It is either dismissed offhandedly in a "perhaps" or omitted by design: the first principle of KR defines a representation as a "surrogate," with no recognition of the discrepancies between stand-in and original. This serves the goals of aptly named "knowledge engineering" but not those of science, both in the common and etymological senses. The assumptions of KR are profoundly consequential because, as the philosopher Michael Williams has pointed out, "'know' is a success-term like 'win' or 'pass' (a test). Knowledge is not just a factual state or condition but a particular normative status." "Knowledge" is therefore a term of judgment; projects, like KR, which demarcate it "amount to proposals for a map of culture: a guide to what forms of discourse are 'serious' and what are not" (2001: 11–12). So again the point made by Winograd and Flores: "that in designing tools we are designing ways of being" and knowing.
I have used the term "relegate" repeatedly: I am thinking of the poet Ovid, whose legal sentence of relegatio, pronounced by Augustus Caesar, exiled him to a place so far from Rome that he would never again hear his beloved tongue spoken. I think that what is involved in our admittedly less harsh time is just as serious.
But dangerous to us only if we miss the lesson of modeling and mistake the artificial for the real. "There is a pernicious tendency in the human mind," Frye remarked in his 1989 lecture, "to externalize its own inventions, pervert them into symbols of objective mastery over us by alien forces. The wheel, for example, was perverted into a symbolic wheel of fate or fortune, a remorseless cycle carrying us helplessly around with it" (1991: 10). Sowa, who has a keen historical sense, describes in Knowledge Representation the thirteenth-century Spanish philosopher Raymond Lull's mechanical device for automated reasoning, a set of concentric wheels with symbols imprinted on them. We must beware that we do not pervert our wheels of logic into "symbols of objective mastery" over ourselves but use them to identify what they cannot compute. Their failures and every other well-crafted error we make are exactly the point, so that (now to quote the Bard precisely) we indeed are "Minding true things by what their mock'ries be" (Henry V iv.53).
1 This essay is a substantial revision of a plenary address delivered at the conference of the Consortium for Computers in the Humanities/Consortium pour ordinateurs en sciences humaines (COSH/COCH), May 26, 2002, at Victoria College, University of Toronto, and subsequently published in Text Technology 12.1 (2003) and Computers in the Humanities Working Papers A.25.
Achinstein, Peter (1968). Concepts of Science: A Philosophical Analysis. Baltimore, MD: Johns Hopkins Press.
Black, Max (1962). Models and Metaphors: Studies in Language and Philosophy. Ithaca, NY: Cornell University Press.
Brown, John Seely, and Paul Duguid (2000). The Social Life of Information. Boston, MA: Harvard Business School Press.
Bush, Vannevar (1967). "Memex Revisited." In Science is Not Enough. New York: William Morrow and Company, pp. 75–101.
Cartwright, Nancy (1983). How the Laws of Physics Lie. Oxford: Clarendon Press.
Collingwood, R. G. (1993). The Idea of History, rev. edn. (Ed. Jan van der Dussen). Oxford: Oxford University Press.
Culler, Jonathan (1988). Framing the Sign: Criticism and its Institutions. Oxford: Basil Blackwell.
Feyerabend, Paul K. (1993). Against Method, 3rd edn. London: Verso.
Frye, Northrop (1991). "Literary and Mechanical Models." In Ian Lancashire (Ed.). Research in Humanities Computing 1. Selected Papers from the 1989 ACH-ALLC Conference. Oxford: Clarendon Press, pp. 3–12.
Gardin, Jean-Claude (1990). "L'interprétation dans les humanités: réflexions sur la troisième voie/Interpretation in the humanities: some thoughts on the third way." In Richard Ennals and Jean-Claude Gardin (Eds.). Interpretation in the Humanities: Perspectives from Artificial Intelligence. Boston Spa: British Library Publications, pp. 22–59.
Gardin, Jean-Claude (1991). "On the Way We Think and Write in the Humanities: A Computational Perspective." In Ian Lancashire (Ed.). Research in Humanities Computing 1. Selected Papers from the 1989 ACH-ALLC Conference. Oxford: Clarendon Press, pp. 337–45.
Geertz, Clifford (1993). The Interpretation of Cultures: Selected Essays. London: Fontana Press.
Groenewold, H. J. (1961). "The Model in Physics." In Hans Freudenthal (Ed.). The Concept and the Role of the Model in Mathematics and Natural and Social Sciences. Dordrecht: D. Reidel, pp. 98–103.
Hacking, Ian (1983). Representing and Intervening: Introductory Topics in the Philosophy of Natural Science. Cambridge: Cambridge University Press.
McCarty, Willard (2002). "A Network with a Thousand Entrances: Commentary in an Electronic Age?" In Roy K. Gibson and Christina Shuttleworth Kraus (Eds.). The Classical Commentary: Histories, Practices, Theory. Leiden: Brill, pp. 359–402.
McCarty, Willard (2005). Humanities Computing. Basingstoke: Palgrave.
McGann, Jerome (2001). Radiant Textuality: Literature after the World Wide Web. New York: Palgrave.
Sapir, Edward (1921). Language: An Introduction to the Study of Speech. New York: Harcourt, Brace and World.
Shanin, Teodor (1972). "Models in Thought." In Teodor Shanin (Ed.). Rules of the Game: Cross-Disciplinary Essays on Models in Scholarly Thought. London: Tavistock, pp. 1–22.
Sowa, John F. (2000). Knowledge Representation: Logical, Philosophical, and Computational Foundations. Pacific Grove, CA: Brooks/Cole.
Williams, Michael (2001). Problems of Knowledge: A Critical Introduction to Epistemology. Oxford: Oxford University Press.
Winograd, Terry, and Fernando Flores (1987). Understanding Computers and Cognition: A New Foundation for Design. Boston: Addison-Wesley.