Roopika Risam is Associate Professor of Digital Humanities and Social Engagement at Dartmouth. Her research focuses on data histories, ethics, and practices at intersections of postcolonial and African diaspora studies, digital humanities, and critical university studies.
Alex Gil is Senior Lecturer II and Associate Research Faculty of Digital Humanities in the Department of Spanish and Portuguese at Yale University, where he teaches introductory and advanced courses in digital humanities and runs project-based learning and collective research initiatives. Before joining Yale, Alex served for ten years as Digital Scholarship Librarian at Columbia University, where he co-created and nurtured the Butler Studio and the Group for Experimental Methods in Humanistic Research. His research interests include Caribbean culture and history, digital humanities and technology design for different infrastructural and socio-economic environments, and the ownership and material extent of the cultural and scholarly record.
Minimal computing is the answer. Minimal computing is easy. Minimal computing consists of multiple methods, some as yet to be imagined. Minimal computing will end pandemics/thwart Putin/stop climate change. Minimal computing will save the humanities. Minimal computing is a false prophet. Minimal computing distracts from the pressing issues of the day. Minimal computing is static site generation. Minimal computing is hard. Minimal computing is not the answer.
Perhaps all of these statements, made explicitly or implicitly throughout this special issue, are true.
Defining minimal computing is as quixotic a task as defining digital humanities itself, particularly when innovation is defined by newness, scale, or scope. Broadly speaking, minimal computing connotes digital humanities work undertaken in the context of some set of constraints. This could include lack of access to hardware or software, network capacity, technical education, or even a reliable power grid.
Minimal computing is an approach that, first and foremost, advocates for using only what is necessary and sufficient for a given project. The minimal of minimal computing neither implies ease for all users nor prescribes acceptable types of hardware, software, and platforms (e.g., Jekyll, Arduino, and Raspberry Pi); we use Minimal Computing™ to denote this misperception.
Our observations here may not satisfy those expecting a more concrete definition
of minimal computing. Therefore, we offer the following: minimal computing is
perhaps best understood as a heuristic comprising four questions to determine what
is, in fact, necessary and sufficient when developing a digital humanities project
under constraint: 1) What do we need? 2) What do we have? 3) What must we prioritize? and 4) What are we willing to give up?
What do we need? is a question that echoes throughout the essays and case studies in this special issue. In 2016, Alex, along with Élika Ortega, posed this question in their essay on multilingual practices and minimal computing. The question of what we need is one intended to cut through a tendency in digital humanities to valorize keeping pace with trends towards high-speed computing, acquisition of the latest computational technologies, and fetishization of the cutting edge. Sometimes — perhaps often — when we pause to consider what we need, we find that being innovative for the sake of innovation can be a deterrent to actually completing (or even starting) a project. For example, Roopika (Roopsi) has resisted claims of "this-platform-is-digital-humanities" and "that-platform-is-not-digital-humanities" because, ultimately, digital humanities is not defined by which platform one uses but by what one does with it. A team might likewise default to an "easy" but expensive tool like NVivo when a simpler option would serve. When we reorient our praxis around the question of what we need, we can resist these pressures.
The question of What do we have?
is equally critical because it encourages
practitioners to focus on the assets available to them and thus resists a deficit
mindset for those of us who are working under constraints. Quite often, the lauded
models of digital humanities scholarship are projects developed with significant
institutional resources and grant funding. It’s easy to look at those examples and
focus on what we lack. Roopsi and her colleagues at Salem State University instead put the question of what do we have? at the heart of their digital humanities initiatives with students. They focused on the resources they had — archival holdings on early 20th-century local history, their collective knowledge, their library server, and existing faculty professional development programs that they could reappropriate to build a digital humanities internship program for students.
Following on the questions of what we need and what we have, asking What must we
prioritize?
is essential to the mode of thinking for which minimal computing
advocates. When working under constraints, we cannot treat all competing
priorities in a project as equally important. When developing a recent project,
Roopsi chose to use WordPress rather than the Jekyll static site generator for the
website. While using Jekyll would have reduced maintenance and increased security
because it does not rely on a database — two features that Roopsi prefers —
WordPress’s graphical user interface (GUI) made website updates easier for all her
collaborators. The question What must we prioritize? thus speaks to the fact that minimal computing does not prescribe or advocate for the use of particular software, hardware, or platforms but rather points to a decision-making process that responds to the constraints of a given situation for project development.
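The tradeoff Roopsi weighed can be made concrete. A static site generator such as Jekyll compiles content into plain HTML files ahead of time, so the published site needs no database to maintain or secure. The following Python sketch (with hypothetical page content and output paths, not drawn from any project in this issue) shows the core idea in miniature:

```python
# Minimal sketch of static site generation: render all content to plain
# HTML files ahead of time, so the published site needs no database or
# server-side code at runtime. Page content here is hypothetical.

import pathlib

TEMPLATE = """<!DOCTYPE html>
<html><head><title>{title}</title></head>
<body><h1>{title}</h1>{body}</body></html>"""

# In Jekyll these would be Markdown files with YAML front matter.
pages = {
    "index": {"title": "Home", "body": "<p>Welcome to the project.</p>"},
    "about": {"title": "About", "body": "<p>A minimal computing project.</p>"},
}

out = pathlib.Path("_site")  # Jekyll's conventional output directory
out.mkdir(exist_ok=True)
for name, page in pages.items():
    # Each page becomes an ordinary file any web server can serve.
    (out / f"{name}.html").write_text(TEMPLATE.format(**page))
```

Because every page is an ordinary file, there is nothing to patch or secure at runtime, which is the maintenance and security benefit Jekyll offered in the scenario above; the cost is that collaborators must edit source files and rebuild, rather than update content through a GUI such as WordPress's.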
The final question for minimal computing is, What are we willing to give up?
In
environments in which we are contending with limitations, whether of
infrastructure, finances, labor, and/or technical knowledge, among other factors,
we simply cannot have it all. There are tough decisions to be made, taking into
account what we need, what we have, and what we must prioritize. This could mean
eschewing the latest, flashiest methods that would cost more money, time, and
labor, in favor of a simpler approach that would be practically achievable with
what we have. Or it could mean choosing a platform that does not meet every
desired requirement of a team but still makes it possible for a project to move
forward.You can’t always get what you want but if you try
sometimes, well, you just might find you get what you need
The minimal
in minimal computing
therefore stands in stark contrast to an
implied maximal,
where maximal
connotes design choices that are made without
putting the question of what is necessary and sufficient at the heart of
decision-making. The primary contribution of minimal computing to digital
humanities is to draw attention to the fact that the decisions we make when
designing digital humanities projects — our use of particular kinds of hardware,
software, and platforms — are not inherently virtuous (or lacking in virtue) but
inevitably come with opportunities and challenges, affordances and
limitations, and benefits and tradeoffs. The authors in this issue
grapple with these very concerns — just as the two of us do as practitioners —
each coming to different answers that are responsive to the local contexts in
which their theories and practices are being developed and the constraints of
these environments. This is our minimal computing.
Minimal computing in the humanities — like digital humanities itself — emerges from many parallel and intersecting origin stories. Rather than tracing the genealogies that led us to this special issue, we focus here on the tensions that we believe will continue to shape the four questions of minimal computing — 1) what do we need? 2) what do we have? 3) what must we prioritize? and 4) what are we willing to give up? — in the foreseeable future.
The first tension implied in minimal computing is between the constant drive
towards larger, faster, always-on forms of computing and the infrastructural,
institutional, and financial realities that constrain digital humanities project
development. In the context of academic research, this tension dates back to the
birth of modern computers, long before the advent of the personal computer or the
Internet. At that time, only elite universities in wealthy countries could afford
mainframe computers for computational labor. Today, this impulse takes many forms
in digital humanities, including the valorization of cluster computing, high-speed
Internet, cloud computing, and big data. Those supporting and relying on
computation who work on the creation of online publications cite the need for
user-friendly data entry (i.e., GUIs in web browsers) or continuous publication of
new materials. Those working in cultural analytics, text analysis, and the like
cite the need for large data processing capacity in the drive towards new insights
derived algorithmically from large data sets of cultural corpora. Implied in this
narrow definition of innovation is an access differential between those who have
access to such technologies and those who do not.
The implications of this attitude have substantial impact on the future directions
of digital humanities research. For example, grant funding and institutional
support for digital humanities scholarship follows this mentality, exacerbating
existing inequities. The creation of projects that rely solely on the Internet for
distribution, thereby excluding large groups of scholars around the world from
access, offers another example. While researchers in the Global North are
beginning to develop more nuanced understandings of the asymmetrical ecologies of
access to and use of technology — which researchers of the Global South have long
understood — the expenses tied to newer, faster, and bigger technologies continue
to have material implications on the ground. We use Global North and Global South as shorthand for the divide between high-income and low-income economies produced by colonialism. However, we recognize that, like related terms such as developed countries, the Third World, and the West, this shorthand has limitations in its tendencies towards homogenization and its geographical accuracy.
The second tension is a relatively newer one between metaphorical computer
literacy — the ability to use GUIs, an act in which most people with computational
devices engage through their ordinary interactions online — and symbolic
computational literacy, or the ability to code,
which remains the purview of a
rare few, especially in the humanities. Much has been made of the debate over
whether one must code to be a digital humanist.
The history of digital humanities is marked by many laudable efforts to create
tools with GUIs that allow scholars to create digital scholarship in the
humanities without having to develop much symbolic computational literacy.
However, these tools are inextricably linked to the dominance of English as a
lingua franca for programming and markup languages, with downstream implications
for those working with languages other than English — namely the emphasis on
Anglophone scholarship in digital humanities and the comparative underdevelopment
of multilingual digital humanities, particularly languages in scripts other than
Latin and those read from right to left.
The minimal
of minimal computing
has been assumed by some to promise ease of
use or to only require a minimal amount of symbolic computational literacy.
Certainly, a team might choose to use a GUI because it best serves the questions
of what we need, have, must prioritize, and are willing to give up. However, like
all design decisions, such a choice comes with consequences. While those who doubt
their ability to learn how to code see the use of GUI-driven platforms as the key
to access, these systems often foreclose greater control over the production of
knowledge and, by extension, more meaningful participation for those who
seek access. As Matthew Kirschenbaum has proposed, coding is a form of
worldmaking, in which the coder defines how that world operates.
By making the transition to thinking and practicing in the realm of symbolic
computational literacy, we can more easily seek solutions that promote broader
access through reduction of technological complexity (e.g., minimizing reliance on
databases, thereby reducing security risk and maximizing access in low bandwidth
environments). The idea of a reduction in computation has two useful and concrete
senses for us: first, literally less code or fewer bytes, and in turn, less
computational processing time or capacity. This notion of doing more with less has
been fundamental to the development and teaching of computer science, where
students are introduced early on to the concept of Big O notation,
which
highlights the importance of ever more efficient algorithms to accomplish a given
task. We also see this drive towards elegance
in the history of UNIX systems,
and in some of the foundational forms of computing that are still in use today in
digital humanities. A reduction in computation implies a reduction in energy
consumed, storage, and labor. In the same vein, we know that computation itself
allows us to perform many tasks that could be done by hand but would take
substantially longer. Take for example the creation of a works cited page: we can
either construct it by ourselves, or we could let software like Zotero write it
for us based on our bibliographic data. While by no means wedded to reduction in
computation or the substitution of manual labor by computation as requirements,
minimal computing asks us to imagine how these might help us accomplish our
various scholarly tasks in the humanities.
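The sense of "doing more with less" that Big O notation captures can be illustrated with a small, hypothetical Python sketch: two functions that deduplicate a list of citation keys (a task in the spirit of the bibliography example above) produce identical results, while one does quadratically more work than the other as the list grows.

```python
# Two ways to deduplicate a list of citation keys while preserving order.
# Both produce identical output; they differ in how much work they do.

def dedupe_quadratic(keys):
    """O(n^2): for each key, scan the result list for a prior copy."""
    result = []
    for key in keys:
        if key not in result:  # linear scan nested inside the loop
            result.append(key)
    return result

def dedupe_linear(keys):
    """O(n): a set gives constant-time membership checks."""
    seen = set()
    result = []
    for key in keys:
        if key not in seen:
            seen.add(key)
            result.append(key)
    return result

# Hypothetical citation keys, for illustration only.
citations = ["gil2016", "risam2018", "gil2016", "kirschenbaum2009", "risam2018"]
print(dedupe_quadratic(citations))  # ['gil2016', 'risam2018', 'kirschenbaum2009']
print(dedupe_linear(citations))     # same output, far less work on large lists
```

Neither function is more correct than the other; the difference in efficiency only matters as inputs grow, which is precisely the kind of contextual judgment the questions of minimal computing ask us to make.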
Implied in the first two tensions is a crucial third: the tension between choice and necessity. In her work at Salem State University, Roopsi worked with minimal hardware, created small static sites, and processed small data sets out of necessity, born from working at a resource-starved public university. Alex, in his work at Columbia University, did so often by choice. In her new position at Dartmouth College, Roopsi will have the choice to work with expensive software or cluster computing — a choice that Alex had at Columbia and will have at Yale University. Both of us can take for granted high speed Internet access provided by our universities, as well as access to a reliable power grid. Even accounting for the major disparities in resources between the institutions where we started our careers, we recognize that our work in digital humanities has been undertaken in relative privilege not shared by our colleagues around the world.
Choosing to do otherwise — to pursue maximal approaches to digital humanities — is
not inherently noxious. However, the humanities, charged with the interpretation
and stewardship of human culture writ large, cannot afford to ignore scholarship
developed under substantial constraints. Otherwise, we will only
reproduce and amplify the exclusions and biases that colonialism and
neocolonialism have produced in the analog cultural and historical record.
The tensions we have outlined arise out of unfortunate political, historical, and
economic circumstances that provide urgent and appropriate ground for minimal
computing practices and theory to thrive. This larger set of circumstances is beyond the capacity of humanities practitioners to resolve, but it certainly provides the frame and fuel for much of our practice today. We would be remiss, however, not to acknowledge the relationship between the climate crisis and the development of computational technologies: computing emerged from, and remains entangled with, energy-intensive and extractive industries, and a reduction in computation is also a reduction in environmental cost.
To be clear, our investment in minimal computing comes not from a fetish for
computational reduction or a bias against databases or supercomputing but from a
very real fear that reliance on these technologies is foreclosing the
possibilities for the development of a digital cultural record that includes the
voices and stories from communities that have been elided in the cultural record —
like our own.
The construction of this new record of the human past, present, and future, as we
have also suggested, is gravely affected by socio-technical inequities — not every
cultural heritage or scholarly outfit around the world has access to the same
infrastructure or resources. In response to this, minimal computing encourages
solutions that can be implemented universally. The irony of two people trained as
postcolonialists advocating for a universal is not lost on us. This is the same
paradox that many anti-colonial and postcolonial writers explored in the 20th
century: to seek the universal in the recovery of the particulars that were
ignored in colonial archives and rendered invisible by the totalizing impulse of
European colonialism and its Enlightenment legacies.
A quick overview, in the largest of broad strokes, paints a concerning picture of
the scholarly record at a planetary scale today. The European, North American, and
East Asian dominance in the production of knowledge is evident in terms of brute
quantity. Concomitant with this state of affairs, several pirate operations have
arisen from the ashes of the former Soviet Union and its satellites, creating
libraries that give access to a large part of this knowledge to the rest of the
world with access to the Internet. Yet knowledge continues to flow from the West to the
rest — mimicking flows of colonial power from center to periphery — without any
clear sign that the movement is reciprocated through uptake of scholarship from
the Global South.
At the heart of this state of affairs is the role of capital in the control of
scholarly production. Open access and pirate enterprises point away from the
accumulation of capital, and as such, clash with the monopolizing tendencies of
the North Atlantic and East Asian models of knowledge production, which coincide
with larger expenses in computational infrastructure. It is, perhaps, no surprise
that such tendencies have led to the knowledge cartels of the Global North
attempting to co-opt open access through article processing charges (APCs) levied
on authors to make their scholarship open access. Minimal computing intervenes by
studying and creating modes of production that promise control at the local level
for those who wish to avoid absorption by capital, who don't want to cede their
intellectual integrity to the pressures that come with that absorption, and whose
work is suppressed because it does not align with that which capital values (i.e.,
knowledge production beyond the Global North). In short, minimal computing seeks
viable modes of knowledge production outside capital.
When we speak of knowledge production, we no longer speak simply of the production
of documents. We include the production of data, data sets, and documents as data,
all of which can be used for algorithmic analysis or manipulation. The issue of
control over data sets, especially those that can inform the pasts of whole
demographics of people in the world, will certainly come to a head in the 21st
century. One example of this danger is control over Black data. At the moment of
writing, the vast majority of the data and documents that help us understand the
history of Black people during the period of Atlantic chattel slavery are
controlled by predominantly white scholarly teams and library administrators or
white-owned vendors.
All of this scholarly and cultural work by necessity implies the need for different labor arrangements. Large or wealthy universities and colleges in the North Atlantic, for example, enjoy full-time information technology and digital scholarship teams working in and outside libraries that provide a certain degree of stability to the creation of digital collections and digital humanities scholarly projects. A few companies, staff on soft-money, and independent contractors have also joined the fray to provide their technical services. Granting agencies in rich economies have supported the production of digital humanities projects that involve arrangements between non-technical scholars and teams of technologists, often with an unacknowledged systems administration team provided by institutions or a company. In this arrangement the scholars with resources bear little pressure to understand the fundamentals of the labor arrangements that make their projects possible and are alienated from the means of production of their own knowledge. In addition to our concern for those who provide invisible labor, we recognize that those who do not have access to such arrangements will only continue to lag in their ability to keep pace with those who do. By advocating for minimal computing, therefore, we aim to create a more level playing field for the future of a digital cultural record where the voices of those who have been excluded can be heard and valued through a more equitable, collaborative approach to the labor of knowledge production that facilitates their engagement.
In their own ways, articles in this special issue speak to the ways the tensions
of environment, race, access, labor, and control interface with the material
realities of producing digital humanities scholarship under constraints. This
issue features two types of contributions: theoretical essays and case studies.
The five lengthier essays build on the early writings of the Minimal Computing
Working Group, as well as our own writing.
The essays explore the theories and practices of minimal computing. We open the issue with Grant Wythoff's essay.
Equally important are the case studies, which present minimal computing in practice. They collectively offer an exploration of multiple methods, articulating how they exemplify, build on, and expand minimal computing practices in diverse geographic, cultural, and linguistic contexts. We begin with case studies that offer insight on labor and precarity. Matthew Lincoln, Jennifer Isasi, Sarah Melton, and François Dominic Laramée's case study extends the question of What do we need? to center not only technological developments but also ethical engagement.
We continue with case studies that examine cross-border collaborations. Sylvia Fernández's case study discusses a project developed across national borders.
Taking up minimal editions from another angle, Zahra Rizvi, Rohan Chauhan, A. Sean Pue, and Nishat Zaidi contribute the next case study.
We conclude the issue with case studies that explore applications of minimal computing beyond textual studies, beginning with Chris Diaz's case study. The AudiAnnotate Project: Four Case Studies in Publishing Annotations for Audio and Video then discusses its authors' work creating the AudiAnnotate platform, which builds on the IIIF standards for AV to address challenges of engaging with audio through annotation.
As the essays and case studies indicate, we welcomed engagement from minimal computing enthusiasts and critics alike in this special issue. Indeed, our articulation of minimal computing in this introduction is as influenced by the insights and critiques raised by authors in this issue as it was by our previous writing and work with the Minimal Computing Working Group. The most salient critique for us, which demands the most attention, is the technical education (i.e., symbolic computational literacy, or knowing how to code) that reduction in computation requires. Time to learn new skills is a privilege, and technological training is not something that can be delivered in a few workshops or a summer school. Rather, it requires sustained effort with the belief that the time invested will ultimately liberate scholarship from reliance on out-of-the-box tools and open up new possibilities for representation of material from minoritized communities in the digital cultural record. Further, as several of the essays and case studies in this issue testify, a complete divorce from expensive or maximal forms of computation proves impossible at present. The relationship with social media, Google and Amazon infrastructure, large databases, and GitHub will continue for the foreseeable future. Despite these critiques, which can and should continue to be addressed through our collective work, the two of us still see minimal computing as a space wherein we can explore forms of computation that do not depend on expensive infrastructures and the harmful practices of the centers of capital accumulation in the 21st century. In the final tally, we hope this conversation can serve as one of the loci of inspiration for original local and regional practices that best meet the needs of the workers of the record and a critique of current systems of knowledge distribution and (re-)production of the past — both humanistic and technical.
We would like to thank Lydia Guterman for her assistance with copy-editing this special issue and Salem State University’s School of Graduate Studies for funding Lydia’s graduate research assistantship.