DHQ: Digital Humanities Quarterly
2019
Volume 13 Number 3

Building Bridges: Collaboration between Computer Sciences and Media Studies in a Television Archive Project

Abstract

This article sheds an empirical light on interdisciplinary collaboration within the Digital Humanities by investigating the daily research practice of the Dutch Digital Humanities project BRIDGE. The project developed and tested methods for automatically creating meaningful links in, and expanding, archival television data. The project required a high level of collaboration between scholars from two different disciplines: computer sciences and media studies. The majority of the epistemological encounters between the two disciplines took place in the design of the tools and in the user studies to test them. The article is based on structured conversations between the two central staff members in the project, i.e. the computer science PhD student and the media studies postdoctoral researcher. By unravelling the research project as a process of confrontation, identification and acknowledgement of situated knowledges, the article shows when and how the boundaries between the two disciplines have been maintained, crossed and blurred. The authors point to the benefits and challenges of interdisciplinary collaboration in the Digital Humanities, and formulate some best practices for future Digital Humanities projects.

1 Introduction[1]

Computer sciences and the humanities are rightly considered different scholarly disciplines: they use different languages, approaches and paradigms. It comes as no surprise, therefore, that mutual misunderstandings, unbalanced expertise levels, and a lack of interest in the other discipline regularly occur within Digital Humanities projects, which typically work on a new technology and a humanities-based research question. In their anthology, [Bartscherer and Coover 2011, 2] define this core challenge for Digital Humanities research as follows: “Mutual incomprehension persists. Generally speaking, scholars and artists understand little about the technologies that are so radically transforming their fields, while IT-specialists often have scant or no training in the humanities and traditional arts”. Insight into collaborative research practices, and interdisciplinary collaborative research practices in particular, can shed light on the quickly evolving field of Digital Humanities. What does it mean to “do” Digital Humanities? In this article, we look at the practice of interdisciplinary collaborative research between computer sciences and humanities-based media studies.
A case in point is the Dutch television archive project BRIDGE (Building Rich Links to Enable Television History), a collaboration of the Information and Language Processing Group of the University of Amsterdam, the media studies group Centre for Television in Transition of Utrecht University, and the national radio and television archive, The Netherlands Institute for Sound and Vision. The project showcases the synergy of the two academic disciplines involved, computer sciences and media studies, which collaboratively developed and tested digital tools for the exploration of archival television data. Collaboration in DH is often, as [Flanders 2012, 67] states, “aimed at building something that works — a tool, a resource, an online collection”. The focus of the research in the BRIDGE project, however, was not solely on building digital tools for the humanities, but also on the scientific evaluation of tools and algorithms, as part of computer sciences, and on a study of how such tools and algorithms may advance insights in the field of Media Studies. The main outputs of BRIDGE are a PhD dissertation in computer science, publications in both humanities and computer science conferences and journals, and prototypes of tools (which were eventually developed into sustainable tools in follow-up projects[2]). This highly interdisciplinary and evaluation-driven set-up makes the project different from many other DH projects, in which the tools, the technological infrastructures and the humanities research seem to be central.
Many scholars have investigated collaboration practices in the broader setting of technology and software production. [Barry et al 2008], for instance, conducted a large-scale critical comparative study of interdisciplinary institutions based on ethnographic fieldwork and an Internet-based survey. [Lin 2012] bases her article on an autoethnography by one team member during the development of software. Most studies use qualitative research methods (notably social science-based methods such as ethnography and interviews) to produce case studies that focus on how members of a project that involves more than one discipline (struggle to) communicate, negotiate, and cooperate. In this article we also report on an autoethnography, not by one humanities or social science researcher who observes scientists and engineers, but by a humanities scholar and a computer scientist who observe themselves, “the other”, and their collaborative research.
Instead of giving a methodological account of the technical development and testing of the tools, we take a meta stance and reflect on the project’s day-to-day steps in the preparation for, execution of, and reporting on the research carried out. In this article, we revisit these research processes. The main focus of the research process was on conducting user studies. The technical results of these user studies have already been published in computer science outlets ([Bron et al 2011], [Bron et al 2012a], [Bron et al 2012b], [Bron et al 2013]).[3]
Taking an empirical look at the digital humanities, that is, at “doing digital humanities”, we show how the boundaries between the scholarly disciplines of computer sciences and humanities-based media studies in BRIDGE are maintained, crossed and blurred, and to what extent transdisciplinarity emerged. By making the research process more transparent, this paper aims to demystify disciplinary boundaries, to reflect on the production of interdisciplinary knowledge, and to offer a look into the future of Digital Humanities research by suggesting best practices.

2 About disciplines and situated knowledge

At the core of interdisciplinary collaboration is the very notion of discipline. Defining a discipline requires caution as it has different interpretations. We follow [Menken and Keestra 2016, 27] who pragmatically define a discipline as “a field of inquiry with a particular object of research and a corresponding body of accumulated specialist knowledge” (italics by authors). Simply put, academic disciplines are created by groups of scholars as a form of organization of their knowledge about a certain object. The very boundaries and definitions of disciplines vary over time.
Over the course of the past four decades, interdisciplinarity has been widely discussed in the academic literature. Terms such as multi-, cross- and transdisciplinarity are used to describe different interactions between disciplines. All terms can be situated on a continuum: “multi-” means that there is communication but little mutual exchange, “cross-” implies somewhat more exchange, and “trans-” means that something new is created that surpasses the boundaries of the disciplines. According to [Hessels and van Lente 2008, 741], transdisciplinarity goes beyond interdisciplinarity as the former is much more dynamic. In other words, transdisciplinarity can be defined as the radicalization of “existing disciplinary norms and practices”, “allowing researchers to go beyond their parent disciplines, using a shared conceptual framework that draws together concepts, theories, and approaches from disciplines into something new that transcends them all”  [Rosenfield 1992, 1351]. Similar to multi- and cross-disciplinarity, transdisciplinarity is not a given, but a process, something that is created and constructed over time.
Knowledge, too, is not a given, but a construction whose outside-inside boundaries can be theorized as “power moves, not moves toward truth”  [Haraway 1988, 576]. In a constructionist view, knowledge claims are a continuous struggle for meaning, which makes them meaningless outside their own context. We therefore approach the collaboration between two disciplines and interdisciplinary knowledge production as an Actor-Network, as coined by [Latour 2005]. Actor-Network Theory (ANT) addresses the construction and transformation of human values and knowledge and the notion of the actor as agent [Giddens 1991]. According to [Latour 2005], agents are individual entities that assert force while interacting with other entities. Together they form what Latour calls actor-networks. Agents can be actors, i.e. human entities, but they can also be actants, a term that covers not only humans but also non-human entities, such as interfaces. Agents are always bound to other agents acting together, and therefore the output of the entire actor-network cannot be predicted from any agent separately. When investigating interdisciplinary collaboration, then, there is an interaction between two disciplines in terms of knowledge negotiation between agents.
A good addition to Actor-Network Theory can be found in feminist science studies. To pinpoint the constructivism of knowledge, Donna Haraway coined the concept of “situated knowledge”  [Haraway 1988]. The concept is related to science studies and was developed by Haraway in response to Sandra Harding's The Science Question in Feminism [Harding 1987], in which the latter formulated standpoint theory. In a nutshell, these feminist scholars stress the role of the situated researcher and situated knowledge to reflect the importance of diverse ways of knowing for improving the production of knowledge. Situated knowledge, then, means that all subjects have embedded positions: they are part of, and actively co-construct, a disciplinary-bound knowledge. In this context, [Haraway 1988, 188] launched the “greased poles dilemma”: it is hard to climb a pole when you are holding onto both ends of it. The pole is a metaphor for the epistemological question (in this case: what is regarded as acceptable knowledge within a discipline): the duality between positivism and empiricism on the one hand and social constructivism on the other. She argues that a way to overcome these dualities is to articulate “partial visions”, and thus “situated knowledge”. Therefore, these scholars are advocates of multi-method approaches [Naples and Gurr 2013].
In terms of our endeavour: how is the knowledge building of the media studies scholar related to the knowledge building of the computer scientist? How can both “situated” scholars collaborate, if they can at all? In our article, we will show that doing Digital Humanities research is a process in which holding on to each end of the pole is the intrinsic position at the beginning of the climb, but that the key to collaboration lies in identifying, acknowledging and learning about the situatedness of the knowledge of the self/own discipline and of the other researcher and discipline. Doing Digital Humanities, therefore, is not only about using tools to produce papers or developing tools/interfaces per se; it is also (and perhaps chiefly) about the process of (and attempts at) making research and tools transparent by making one's own and the other discipline’s assumptions explicit. In other words, it is a matter of building a bridge between both discipline-specific ends of the pole, walking on the bridge, and (trying to) look at one's own and the other discipline-specific end from another perspective, so as to acknowledge their differences and similarities.

3 Methodology

The article is an analysis of the daily research practice of the two central staff members of the project, one researcher in media studies and one researcher in computer sciences: a (double) self-reflexive account [Giddens 1991].[4] The postdoc in media studies has a specialisation in Film and Television Studies. The computer scientist is specialised in Information Retrieval. The analysis in this article follows a unique approach, as the authors are also the subjects of the research conducted for the article.
In order to reconstruct the research steps, we used two types of data: e-mails on the one hand and a series of semi-structured interviews on the other. We first read our e-mail conversations, and then the humanities postdoc prepared a set of interview questions and topics that formed the basis for a conversation between the postdoc and the PhD student in the form of semi-structured interviews. This was done three times: halfway through the project (year 2), at the end of the project (year 4) and at the start of the writing process of this article (year 5).
In the data (e-mails and transcripts of interviews), we look at what we - the two central staff members on the project - say and how we say it: which words we use. Latour, as [Couldry 2008] notes, does not provide a way in which the agents interpret their networks. To interpret the networks and the researchers’ discourses, we rely on the theories of Stuart Hall (1992) on identity as discursive construction. He approaches identity formation as a process of “becoming” rather than “being”, a process of identification that is constructed on the basis of recognizing certain characteristics shared with another person, group or idea. “Othering”, then, is a form of defining the self: when making differences with the out-group explicit, the in-group defines itself. This implies that words such as “we” versus “them”, or “I” versus “you”, are important markers of identity construction. Therefore, we specifically look at the use of these pronouns to see how the researchers’ disciplinary identities are being constructed and how knowledge is being formed.
The process of othering, however, puts constraints on the writing of an article across disciplines. It is a true challenge to write an article from two different viewpoints and paradigms. Therefore, we needed a new way to cope with this. Following Bartscherer and Coover’s book “Switching codes: thinking through digital technology in the humanities and arts” [Bartscherer and Coover 2011], we believe that a conversation structure of the empirical sections enhances and highlights not only the collaboration in the project but also the very act of writing this article. While the interviews are quoted as “I” and “you”, the article itself is written as “we”. The “we” are both authors, but as this is a humanities journal with (largely) a humanities readership, we take humanities discourse and concepts as leading. For the description of the research process, we use the third person (he/she/the researcher/they) in order to identify ourselves as research objects for the sake of this article. As such, the article is an expression of standpoints (in plural) of which the situatedness is made semantically explicit by quoting ourselves and using the third person. In the conversations, the quotes by the Computer Science PhD student will be indicated by “CS” (while in fact it is Marc Bron). The Media Studies postdoc researcher’s quotes are indicated by “MS” (while in fact it is Jasmijn Van Gorp). The article is written from a multi-disciplinary standpoint, by means of the collaborative software WriteLatex, in itself a way of doing collaborative research. Therefore, the article is in itself, to quote [Latour 1987], “science in the making”.
In the following empirical sections, we present a discursive analysis and deconstruction of the interaction between agents, where different backgrounds lead to misunderstandings but also to interdisciplinary knowledge production. We divided our analysis into four sections which (humanities-wise) correspond with the classic narrative structure of Hollywood films: establishing shot, encounter, conflict and reconciliation. In section 4, we set the scene by elaborating on the basic characteristics of the two disciplines, computer sciences and media studies. In sections 5, 6 and 7 we follow a chronological order, corresponding with the three identified research phases in the project: the encounter, the struggle and the reconciliation. For each phase, we first outline all research steps, then we discuss the challenges encountered and the collaboration strategies per research step in a conversation structure. We conclude with some best practices for collaborative digital humanities projects in section 8.

4 Setting the scene: the two disciplines unravelled

In order to establish the context of the interdisciplinary collaboration, we define in this section the disciplinary characteristics of our own disciplines as viewed by ourselves. That is, as “disciplines” are contested concepts and continuously rethought, it is important to know how we ourselves experience the defining characteristics of our very own disciplines. Each one of us wrote the section concerning our own discipline, also referring to existing literature where relevant. In Table 1 we give an overview of the main identified characteristics of the two disciplines. We look at the object of research, methods of knowledge building, research methods, pace, and publication traditions.
                               | Computer Sciences (CS)                   | Humanities-based Media Studies (MS)
Object of research             | Information systems                      | Media and Culture
Knowledge building             | Discovering structures                   | Associative
Main methods                   | Experiments, computer-led, quantitative  | Interpretation, qualitative
Publication pace               | Fast                                     | Slow
Research / authorship          | Team                                     | Single
Most valued publication outlet | Conference paper                         | Monograph
Table 1. 
Overview of how we define our own disciplines
The distinction between the sciences and the humanities is often described in terms of the objectivity of knowledge versus its experiential aspects. Or, to put it differently, the sciences accumulate knowledge until a structure has been discovered, whereas the humanities' work is associative and particularistic [Bakhshi et al 2009, 10].
Computer sciences is a science discipline, and Information Retrieval is one of its main subdisciplines. “Information retrieval (IR) is finding material (usually documents) of an unstructured nature (usually text) that satisfies an information need from within large collections (usually stored on computers)”  [Manning et al 2009, 1]. A related discipline is Information Studies, which is concerned with how people interact with systems and how those systems can be improved [Bakhshi et al 2009, 6]. This discipline is closer to the humanities, as it also involves qualitative methods such as observation. Although media studies can be situated in both the humanities and the social sciences, the majority of television research (still) takes place in humanities departments. Media studies has the media as its object of analysis, either in a centralised or a decentralised way [Couldry 2005]. Media studies researchers use a variety of data: textual data such as newspapers, Twitter feeds and magazines, and audiovisual data such as films, radio, games and television programs [Bron et al 2016]. Evidently, the sciences are also slipping into Media Studies (and vice versa), which is most clearly seen in New Media Studies, which concerns the study of social media, computers and information systems such as mobile communication. New Media scholars often use quantitative “big data” analysis, but often add a critical interpretative layer to it.
In terms of methodology, the two disciplines also differ. In the computer sciences, computer-led experiments are at the core, as is the creation of new algorithms. Computer sciences measure, and use (solely) quantitative methods. The basic skills a computer scientist should master are a good understanding of theory, algorithms, computer design and programming languages. In humanities-based Media Studies, the individual researcher is much more central. The researcher’s own interpretation is key. Qualitative methods are better suited to the humanistic stance of the humanities [Creswell 2014]. Among the skills a humanities researcher should master are critical reflection, interpretation and theoretical conceptualisation.
The practicalities of the two disciplines also differ. Computer scientists often work in teams of 2 to 6 people, running experiments supervised by a professor. Computer science is a quickly evolving field in which researchers want to publish results at least every six months in order to be first. Typically, they first write a conference paper of around 10 pages, which is peer-reviewed and, if accepted, published in proceedings. A common follow-up is a journal article on those aspects of the conference paper that need more in-depth analysis. Given these factors, the scientific output of computer sciences, as in other “hard” sciences, is high in comparison with the humanities.
Humanities research is sometimes defined as “lonely” research, or as [McCarty 2005, 1] describes the humanities researcher: a “lone scholar”. In the humanities, scholars have tended to be physically alone when at work because writing tends to be by nature a solitary activity. In the humanities, the research often takes place during the writing. The argument of an individual (or duo-) based (humanities) versus a team-based (sciences) research tradition can easily be made by looking at random issues of Media Studies journals such as Cinema Journal and Television and New Media, and at monographs in the field of film, television and media studies. Edited volumes seem to be more collaborative, but the conference of the Society for Cinema and Media Studies, i.e. the major American conference in the field of film and media studies, features a majority of single-authored papers and does not give the opportunity to include more than two authors on a paper. For research assessment, some humanities departments do not take publications with more than a certain number of authors into account, while other humanities departments encourage co-authored articles by giving them more weight than single-authored articles. Furthermore, for humanities-based media studies, the single-authored monograph is an important outlet of scholarly work, which partly explains the slow publication pace of the discipline in comparison with computer sciences.
We are well aware that our own definitions and characteristics of the disciplines as put forward here can be contested; however, these are the differences as perceived by us, as we encounter(ed) them in our own European, national and time-specific contexts. This self-defined overview of differences between the disciplines provides the starting point, the “point zero”, showing where the following discourses stem from and, more importantly, where the challenges can be situated in interdisciplinary projects and where transdisciplinarity may arise. The research direction in which both disciplines met, and the discursive field that surfaces as a result of interdisciplinary interactions and negotiations between the agents involved, can probably be situated in what [Hartley 2009] calls “Cultural Science”: a movement that challenges the current disciplinary distinctions between the humanities and social sciences on the one hand, and the math-based sciences on the other. It can just as well be defined as “Digital Humanities”, of course. Whatever concept is used, we tackle the disciplinary boundaries between the sciences and the humanities in the next sections from the double perspective of the humanities and computer sciences. The question, now, is how the project evolved and how the researchers of the two disciplines may have come closer to each other.

5 The encounter: two disciplines, limited interaction

5.1 Description of the research process

In the first year, 2009, the PhD student in computer sciences was the only staff member. Research focused on entity search, i.e., exploring algorithms for linking entities for a specific task. A good example of how entity search is used in modern search engines is the knowledge card: for searches about people, companies or locations, not only a list of ten result links is presented, but in specific cases additional information as well. For the search terms “Michael Jackson”, for example, pictures, links to his work, and a short biography of the famous artist are shown. The algorithms guess which person the user intended to find, i.e., the artist and not any other person named Michael Jackson[5], and the collection of information about this person was the subject of study.
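To make the disambiguation idea concrete, here is a minimal, purely illustrative Python sketch (our own simplification, not the algorithm developed in the project): given some context terms from a search, it picks the candidate entity whose associated terms overlap most with that context.

# Illustrative sketch only; entity names and term sets are invented.
def disambiguate(context_terms, candidates):
    # candidates: entity name -> set of terms associated with that entity
    return max(candidates, key=lambda e: len(context_terms & candidates[e]))

candidates = {
    "Michael Jackson (artist)": {"thriller", "pop", "singer", "moonwalk"},
    "Michael Jackson (writer)": {"beer", "whisky", "author", "guide"},
}
print(disambiguate({"pop", "singer"}, candidates))  # -> "Michael Jackson (artist)"

Real entity-linking systems are of course far more sophisticated, but the sketch captures the core intuition: context decides which “Michael Jackson” is meant.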
At the start of the second year, 2010, when the postdoc entered the project, the team was complete and could start its interdisciplinary research. A practical observation we have to add here is that the computer scientist and the media studies scholar did not reside in the same building, but worked at different universities and even in different cities. The computer scientist worked on a large industrial science campus outside Amsterdam; the media studies scholar worked in a historical mansion of the humanities faculty of Utrecht University. At the start of the project, they did not meet regularly, as they were also both living in the cities where their universities were located.
In order to develop a new tool, the first step was to understand how media researchers use their current main tool to acquire research material, i.e., the existing tool (catalogue) of the national radio and television archive, called “iMMix” (zoeken.beeldengeluid.nl). For the first study within BRIDGE, the team set out to investigate how the “known item search system” iMMix can be used for an exploratory search task, in order to compile a list of features that would be desirable for a new search system. iMMix formed the “baseline”, a concept which is common in computer sciences but not at all in the humanities. The baseline is the point of reference, the “standard system” that is contrasted with the “new” system; the latter should perform better than the standard system.
The online search system of The Netherlands Institute for Sound and Vision, iMMix, consists of metadata descriptions of around 1.2 (currently 1.6) million television and radio programmes of Dutch public broadcasters, provided by professional annotators. It has a simple and an advanced search option that allows users to search on specific fields. After the search results have been displayed, they can be refined by a filter system. However, the search system was developed for an experienced group of users: broadcast professionals. Broadcast professionals, who need to reuse material, often know what they are looking for and use a directed search, for instance by providing a title or specific content. Consequently, iMMix mainly supports “known item” search and can be considered a “known item search system”. Media researchers, however, would benefit from an “exploratory search system”, a system that supports them in browsing, learning about the representation of their research topic in the collection, and jumping from one topic to another [Bron et al 2011] [Bron et al 2012b].
For a pilot study with the standard system iMMix, the researchers were looking for a test person: a media researcher with no or only a little experience with the search system, and with “limited” prior knowledge of the content in the archive, so that s/he would have to rely heavily on the search system for support. The team did not have to look far to find a volunteer. The postdoc was the ideal candidate: a Belgian postdoc in media studies who had never worked with the Dutch iMMix before. The team set up a small experiment, in which the postdoc had to do a search on a self-selected topic. During the search, she had to think aloud, in the presence of her fellow researcher from computer sciences, who observed her search behaviour. The search was logged, videotaped and sound-recorded. She initially searched for “Russia” and ended up with three clusters of programmes for further scrutiny: two travel reports, two programmes about migrant children and two current affairs programmes (see [Van Gorp 2013] for a detailed first-person account of the two-hour search session).
By then, it was time to conduct research for a computer science paper, as part of the PhD project in computer sciences. As users, the researchers chose to work with 22 media studies students (freshmen). Using the iMMix search system, the students had to find five television programs for each of three tasks in a limited period of time. The tasks were designed to vary in difficulty in terms of the prior knowledge required. They were defined in such a way that the students could not use any of the elements/terms given in the task to find correct answers, and had to extensively explore the archive instead. The first two tasks were situated in contemporary times, while the last one was historical: (i) find 5 television programs with comedians from a non-western country; (ii) find 5 episodes of drama series in which location plays an important role; and (iii) find 5 game shows with a female host broadcast during the 1950s, 1960s or 1970s. For the first task, for example, students first had to find names of comedians from a non-western country, while not being allowed to use any other tool or search engine besides iMMix, before being able to find correct answers/television programs. After completion of the three tasks (or when the maximum 10-minute time limit per task was over), students completed a survey that also contained open questions. The researchers also conducted four in-depth interviews with students after the computer experiment. The study showed that the search system iMMix did not provide the necessary support for exploratory search by (student) researchers (see [Bron et al 2011] for a detailed account).

5.2 Conversation about the research process

In the first year, there was only one researcher. The computer science PhD student recalls: “It was really informatics research; there was no humanities aspect in there. The task has been evaluated on a benchmark set, created artificially.” After one year of “hard core” computer science research, the arrival of the humanities postdoc in the project changed the scene.
The very first collaborative action was the observation of the postdoc. For the computer scientist, it was very inspiring to observe a media researcher at work:
CS: “It was such an eye opener for me. You did totally other things than I expected. What you did was totally different from the way that we evaluated our entity links. I was immediately inspired and knew how to improve the interface and how we could improve our algorithms. I realized that what we did before, making lists of entities, is not that relevant for the search process of media researchers.”
Example 1. 
The media studies postdoc, however, was initially very surprised that the observation was so helpful for her computer science colleague. Two years later, she started to see what was so illuminating about her very first search when she took the videotapes from the shelves and started to analyse her own search. By retrospectively observing her own search behavior, she understood that the relation between (re)search object, research question and search behavior of humanities scholars is quite particular. It resulted, amongst other things, in a reflective article on her very first search in the archive [Van Gorp 2013]. In the end, therefore, the observational study of the postdoc using a think-aloud procedure turned out to be quite a “ground-breaking”, and thus effective, way to get to know each other’s specificities and to get introduced to each other’s discipline.
However, back then, the observational study of the postdoc was considered rather limited: it did not provide the sheer size/quantity that is usually required for a computer science paper. Therefore, the PhD student wanted to conduct a user study with a high number of participants to have a “baseline” measurement, in order to be able to measure the improvement of the future (to be developed) “exploratory search system”.
CS: “I was used to evaluate things, so I wanted to evaluate, evaluate, evaluate. By then, we thought it would be possible to “measure” how well the old system iMMix worked and then how well our new system works. Pretty naive.”
Example 2. 
The postdoc remembers that she did not really understand the purpose of the user study back then. Words such as “baseline measurement” and “exploratory search system” did not ring a bell.
MS: “I had really no clue what you wanted to do with the user study, and what you meant with “exploratory search system”. And then, at a certain point, I interpreted “exploratory search system” as something that has to do with cultural memory. I translated it as a system that inspires users to formulate new search terms; to provide them with inspiration in case their cultural memory is not sufficient to find television programmes. It is a kind of memory machine: it gives suggestions for new search terms.”
Example 3. 
CS: “I really had no clue what you meant with cultural memory, and I am still struggling with it.”
Example 4. 
For the postdoc, “exploratory search system” was a fuzzy concept, while “cultural memory” was a vague concept for the PhD student. Paradoxically, it was this very semantic gap that provided inspiration for the humanities scholar to give a “humanities” touch to the user study. The humanities postdoc eventually wanted to do an empirical study of the concept of cultural memory by conducting qualitative interviews with students.
MS: “I was trying to find ways to make the study also interesting for my Media Studies research. So, I insisted on doing some in-depth interviews with the students to complement the quantitative results. I did not see how quantitative data such as mouse clicks could possibly fit into a media studies research design. Of course, now I do know. But back then I really considered you and your goal as “Other”, something I did not understand.”
Example 5. 
In the end, the four interviews did not provide sufficient ground for a separate humanities paper, so both researchers worked on a computer science paper together. The results of the study gave some insight into the search behavior of students, but the scholars both realized that the quantitative measurements as well as the four interviews with students were not sufficient to fully grasp the particularities of the search behaviour and search systems of media researchers. They needed a different approach for their next user studies. However, they did not realize back then that the “seed” (or “solution”) for a better set-up of a user study was exactly this combination of quantitative measurements with qualitative interviews, as shown in the next sections of this article.
While the semantic gap provided inspiration on some occasions, it also impeded progress. There was a lack of semantic interoperability. For example, in preparation for the user study, the postdoc had to create tasks for the students and formulate “queries” for the interface.
MS: “I remember that I had to formulate queries. I really had no clue about what “a query” meant, so I used the word “key word” instead. But “key word” to you meant something totally different, so we had the one misunderstanding following the other.”
Example 6. 
A query is a request for information from a database, such as “Comedian AND non-Western”. Key words, in computer science and information studies language, however, are the tags attributed to documents. In the case of the iMMix interface, the keywords were the ones on display in the metadata field “keywords” of television programmes. While the postdoc did not really understand why she had to formulate the tasks for the students in a computer query language, the computer scientist did not really understand why the postdoc was referring to tags all the time.
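A toy example may clarify the distinction. In the Python sketch below (hypothetical records; the field names are invented for illustration), a query is evaluated against the free-text description of a programme, whereas key words are matched against the tags in the “keywords” metadata field:

# Hypothetical programme records, for illustration only.
programmes = [
    {"title": "Comedy Night", "description": "stand-up comedy by a Moroccan comedian",
     "keywords": ["comedy", "stand-up"]},
    {"title": "News Hour", "description": "daily news and current affairs",
     "keywords": ["news", "current affairs"]},
]

# A query is a request over the (meta)data, e.g. "comedian AND comedy".
query_hits = [p for p in programmes
              if "comedian" in p["description"] and "comedy" in p["description"]]

# Key words are the tags attributed to documents by annotators.
keyword_hits = [p for p in programmes if "comedy" in p["keywords"]]

print(len(query_hits), len(keyword_hits))  # 1 1

Both lists happen to contain the same programme here, but the mechanisms differ: the query interrogates the content, while the key words rely on what annotators attached to the record.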
In general, the staff members of the project really had to get used to each other, and were still focused on their own fields: CS: “The first phase was interesting but also frustrating. I really had to get used to the way humanities researchers work and it was unclear how we could benefit from each other.”
MS: “For me, it was quite a shock, too. I was used to do my own individual research, to take my time. I was not used to create something other than a paper, and to work together with someone who works in a totally different paradigm. It was exciting but at the same time I was really worried, admittedly. I wanted to write a monograph as end-product of the project, or at least conduct research with the tools which were about to be developed, and I did not see a way how this would happen within the framework of computer science experiments. I was afraid I would not have the time to conduct any humanities research.”
Example 7. 
Both researchers define the other discipline as “other” and also talk in terms of “us” versus “them”. The second user study was designed while the two central staff members had two different goals in mind. There was only limited overlap or common understanding between the two disciplines. Each researcher regarded his or her own discipline as the most important. For the computer scientist, it meant that there should be enough participants to make the statistics reliable. For the humanities postdoc, it meant that she tried to obtain some interesting qualitative data. In terms of situated knowledge, both researchers held to their own end of the pole.
The computer scientist, who had started a year earlier, was already able to see the contours of the humanities pole and had already started building a bridge. He was excited about the possibilities and started to learn about Media Studies. He also realized that they needed a different approach to deal with humanities scholars. The media studies scholar, who started a year later, on the other hand, firmly held on to her pre-defined end of the pole, the humanities, and was not planning to build a bridge, let alone walk on it. The “other” discipline still looked “alien” to her during her first six months in the project. The postdoc was not yet really accustomed to the computer science working style, and was convinced that the humanities part of the research (i.e. conducting research with the new tools they would develop) was much more her cup of tea. The level of collaboration started to improve in the next phase of the research project.

6 The struggle: a productive negotiation process

6.1 Description of the research process

After one year of the “joint” project, the researchers gradually started to really collaborate. The main aim was the development and testing of the exploratory search system MeRDES (Media Researchers’ Data Exploration Suite). It is in this phase that the two disciplines started to be transcended.
First of all, the idea for MeRDES emerged when the two staff members were attending a talk by one of the creators of Google’s N-gram Viewer. Within 30 minutes, the researchers realized that it would be really good to have such an n-gram viewer for the television archive as an “exploratory search system”. And so they started to develop a kind of n-gram viewer for the television archive, which they called the Media Researchers’ Data Exploration Suite, abbreviated to MeRDES[6] (see figure 1).
MeRDES is specifically developed for media studies research and has two goals. First, it provides users with support for exploration, i.e., support in formulating key words and exploring various aspects of a topic (i.e., a tool for retrieval). A second aim is to provide support for discovering patterns in the data (i.e., a tool for analysis). As data for the search engine, they used the same large metadata set as is used in iMMix. They incorporated two side-by-side versions of a standard exploratory search tool that contains cloud filters. Inspired by Google’s N-gram Viewer, the researchers added a timeline visualization, and a term statistics visualization in which the characteristics of the result sets obtained with each side of the tool are shown and can be compared.
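The underlying timeline idea can be sketched in a few lines of Python (a schematic illustration with invented data; the field names are assumptions, not the actual MeRDES data model): count, per year, the programmes matching each of two queries, which the timeline visualization can then plot side by side.

from collections import Counter

# Invented metadata records, for illustration only.
programmes = [
    {"year": 1965, "description": "game show with a female host"},
    {"year": 1972, "description": "drama series set on a remote island"},
    {"year": 1972, "description": "game show broadcast live"},
]

def timeline(query):
    # Per-year counts of programmes whose description matches the query.
    return Counter(p["year"] for p in programmes if query in p["description"])

left, right = timeline("game show"), timeline("drama")
for year in sorted(set(left) | set(right)):
    print(year, left[year], right[year])
# 1965 1 0
# 1972 1 1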
In order to develop MeRDES, two focus group discussions with media researchers were conducted at two different media studies institutes. The team showed them a mock-up of MeRDES and asked for feedback. Once the new tool was developed, a first version was tested by means of a usability study with 30 computer science students (freshmen). The feedback was given to the software developer, who improved the system. With the improved system, the team was able to conduct a remote online user study with 39 media studies scholars from the Netherlands and the Dutch-speaking part of Belgium. After having viewed a tutorial video and having practiced with the tool, the media researchers were given the following search task in this third study: “Imagine that you have to write a research paper on the portrayal of migrants on Dutch television. Try to find relevant programs in the television archive. The goal of exploring the archive is formulating the initial research question for your paper”. After the search session, the users were asked to provide a research question and feedback in a post-test survey, by means of both check boxes and open questions. The study showed that the double (“subjunctive”) interface supported a wider variety of research questions. For an elaboration on the methodology and the results of the user studies, see [Bron et al 2012b].
Figure 1. 
In parallel with the development of MeRDES, the researchers started conducting interviews with media researchers: PhD students, postdocs, assistant, associate and full professors. They asked them to pick one of their own research projects and reconstruct their research process. This part of the research was “purely” qualitative, and resulted in a peer-reviewed article about the research process of media researchers and the position of the research question in this process (see [Bron et al 2016]).

6.2 Conversation about the research process

Although both researchers shared the idea about the usefulness of a tool providing an N-gram viewer type functionality, they had very different ideas about how such a tool would operate, which impeded the development of the tool. Retrospectively, it was really a struggle; one of the most difficult episodes in the project.
While the computer scientist started to understand and feel comfortable with the field of media studies, the media studies postdoc had to adjust to the speed and way of working of computer sciences.
MS: “The development of MeRDES felt really frustrating to me because the development went so fast. There was no time for me to really read literature and theorize the tool. But we had planned a user study, so we had no other choice than speeding up. The worst thing that could have happened is that we had 39 scholars available to test, but no tool. It really felt as a high speed train. I was not used to do things “now”. If I had deadlines when I was still doing humanities research, I had at least two weeks, and most likely one month. I also remember you were always asking me “what is the time line”? At first, I thought you were referring to the timeline graph in the interface, but then I realized you meant planning with it. And now I really know why it was so important to have a strict planning.”
Example 8. 
The postdoc had difficulties in following the pace. The only way to cope with it was to plan more strictly what she did.
Most difficulties were situated — again — at the level of a lack of semantic interoperability. Another good example to illustrate the mutual misunderstanding is the different take on television genres. The problem was that the iMMix catalogue listed about 300 genres ranging from “game show”, “quiz show”, “quiz”, “amusement”, “Amusement” (with capital) and so on. These television genres contained subgenres, sub-subgenres and also labels media studies scholars do not use. The media researcher, therefore, wanted to make a new division of genres which better matched the needs of media researchers. She knew that media researchers really want to select and mix and match their own television genres. The computer scientist in the project perceived “genre” just as one of the many metadata fields of the television archive, just a facet, similar to “producer” or “channel”. This view really conflicted with that of the media researcher who considered genre as one of the backbones in film and television studies research. They had a long, persistent discussion about how the genres should be implemented in MeRDES.
MS: “For me, it should be clear how the genres were related to the television programmes; so I wanted a separate visualization. For you, genre was just one metadata field. Negotiations ended up in keeping the metadata field genre, not in a separate visualization but as tab of the word cloud.” (see figure 1)
Example 9. 
CS: “In the end, I wrote in our paper: “genre is a very important concept in media studies”, just because all media researchers were insisting on it; during the focus groups, the interviews and you during the development. Genres had to be correct; that had to be precise.”
Example 10. 
The project members had a similar misunderstanding with “snippet”. It took about three months before they realized that they had a mutual misunderstanding about it. The computer scientist wanted to show snippets of television programmes in the interface. Result snippets are the short summaries of a document, designed to allow users to judge its relevance. Typically, a snippet consists of the document title and a short summary, which is automatically extracted [Manning et al 2009, 170].
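In the spirit of that textbook definition, a bare-bones, query-biased snippet extractor could look as follows (an illustrative sketch only, not the project's implementation): it returns the document title plus a short excerpt around the first occurrence of the query term.

def snippet(doc, query, width=60):
    # Find the query in the body and cut a short excerpt around it.
    text = doc["body"]
    i = text.lower().find(query.lower())
    start = max(0, i - width // 2) if i >= 0 else 0
    excerpt = text[start:start + width].strip()
    return doc["title"] + ": ..." + excerpt + "..."

# Invented document, for illustration only.
doc = {"title": "News broadcast, 1968",
       "body": "News broadcast covering the Prague Spring and its aftermath."}
print(snippet(doc, "Prague"))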
MS: “Somehow I associated “snippet” with birds and twitter, as the words sound similar. And so I thought snippets were pop-ups or mouseovers. I did not understand why we needed them in the interface at that point. I could have benefited from a beginner’s course in information retrieval and search systems.”
Example 11. 
Indeed, the lack of semantic interoperability impeded the development of the tool, as it cost time.
On the other hand, the semantic gap also had its function: it forced the researchers to define terms and concepts, which enhanced transparency and resulted in a better design of the interface. The collaboration was also supported by the fact that the media researcher moved and now lived in the same city as the computer scientist’s university campus. They met more regularly, in person, at the computer science campus, which helped to decrease the mutual misunderstandings. At the computer science campus, the media studies scholar also observed how the computer scientists work: in teams. She observed that they are less focused on individual research than humanities researchers are. It made her realize that the individualistic take on research is typical for the humanities, and is not necessarily the only way to go about research. She started to see her own situatedness.
The defining, crucial element of this research phase was the involvement of other humanities students and scholars in the project. This step of involving “users” in the project was the key to increasing the level of transdisciplinarity. The computer scientist explains:
CS: “And then our first real thing: the development of MeRDES. We really had to find a strategy to collaborate. I really struggled with my own lack of understanding of your field on the one hand and that the users did not really know what they want on the other hand. We really have many techniques, that are very cool, but we had no clue whether our techniques are useful for media researchers. Media researchers know some techniques, such as google, but they do not know what is possible so they could not really suggest things.”
Example 12. 
MS: “I did not realize how difficult it was for you. You had to grasp the particularities of Media Studies, while I just had to talk with peers – with similar background, lingua and education.”
Example 13. 
It was important to expand the group of users from one (the postdoc) to many. The focus group discussions had a double function: they were an efficient way to gather feedback on the prototype and a pleasant way to get to know the defining characteristics of Media Studies.
The user-centred tool development also triggered some difficulties. The computer scientist pinpointed the difficulty in the development of the tool as “different views on the tool’s goal”.
CS: “The major problem is that you and also the media researchers in our focus groups were more interested and fascinated by MeRDES as a tool for analysis than for retrieval. Our mutual misunderstandings were based on that. You and our users in the focus groups were always approaching MeRDES as a tool for analysis, as it generates statistics. I just regarded it as a tool for retrieval.”
Example 14. 
MS: “I did not realize this back then. When I noticed that our target researchers raised some critical questions regarding the interface, I often explained it - to myself - that they did not yet see the full potential of computer sciences for the humanities. I now realize that addressing those critical questions within the interface is key.”
Example 15. 
The media researcher started to see her peers as “other”, and felt more comfortable with the “automated” and “quantitative” aspect of their joint research project. She had started to build a bridge. In a later stage, the critical remarks of the researchers helped her to redesign the interface in a follow-up project.[7]
The remote user study with the 39 media scholars went well. However, it was difficult to interpret the results because “humans” provided the data: mouse clicks and survey results of 39 researchers. This meant that both researchers had to learn about human-computer interaction, a field with which neither of them was familiar. For computer sciences, it is difficult to get sufficient statistical power from such a “small” dataset (39 respondents), while for the humanities, online surveys do not always give satisfying information, as researchers cannot ask follow-up questions or conduct in-depth interviews to grasp the interpretation strategies of the respondents. They realized that the interpretative element should get even more space in their next user study, as explained in the next section.
The researchers had a strong belief that more interpretative methods were required for the next user study. This belief was especially strengthened by the parallel research step of conducting 27 in-depth interviews with media scholars. The interviews were conducted at the work spaces of the media researchers, and so also had an ethnographic value for the computer scientist. The interviews were effective in giving the computer scientist an understanding of what the humanities are all about:
CS: “Computer sciences is a really structured field. My first impression about media studies researchers was that they were quite absent-minded. So chaotic. Like how one of our interviewees explained it during an interview: “then the data, the research question, and back”, while she was drawing imaginary loops on the table. And also during our conversations and our first think aloud study. Then you first wanted to do research on Eastern Europe, then on Russian children and then you ended up with research on Russian migrant children. Skipping from one subject to another. But later I understood that it is just part of an interpretative research process in which humanities researchers try to understand a subject.”
Example 16. 
While the method of conducting interviews is common in the field of media studies, for the computer scientist it was a new way of doing research. It was also a kind of “social” literature study:
CS: “During the first interviews, I was totally puzzled about certain terms. They were talking about “reception”, “audience studies”, “production”, and “texts”. I was surprised to what extent culture is central and questioning what culture is, etc. I really did not understand what they meant.”
Example 17. 
The interviewed researchers gave many examples from their research, which helped the computer scientist to understand the discipline-specific terms and to learn about the specificities of media researchers. The media studies postdoc realized that her own (re)search behavior is very typical of Media Studies, which confirmed that her own stake in the project could be considered more or less “representative” of media studies.
In this phase, a productive struggle, there were clear signs that both researchers had started to build bridges. Semantics are one of the most defining parts of a discipline. Learning each other’s semantics is a conditio sine qua non for a good, constructive collaboration. In addition, they got acquainted with methods that are rather “exotic” for their own discipline: the computer scientist was enthusiastic about the focus group discussions for the tool development and the interviews to grasp the field of Media Studies. The media studies scholar started to see the value of computer sciences for the humanities. She was impressed by the efficiency of computer scientists, e.g. how they communicate and plan, and what they can achieve with technologies and programming. Both researchers started to learn new concepts, to understand the benefits and particularities of the other’s discipline, and they both firmly believed in their joint project to build new tools for Media Studies researchers. One of the strategies which helped to increase collaboration was meeting more regularly and spending full working days in the same office, which made it easier to ask questions and consult each other.

7 Reconciliation: crossing the bridge

7.1 Description of the research process

The third and last phase could be regarded as a highlight in transdisciplinarity. The research went smoothly. For this phase, the researchers again built and tested a new interface, but this time more qualitative methods were used for testing it. While MeRDES is very suitable for exploration within a single archive, in the last phase the researchers wanted to develop an interface for contextual research that links several heterogeneous audiovisual collections. They named the interface CoMeRDa (Contextualizing Media Researchers’ Data) (figure 2), and conducted two user studies to test it: a lab study with 44 students and a longitudinal study with 26 students.
The development of CoMeRDa went much more smoothly than that of MeRDES, although the time frame was more limited. CoMeRDa enables simultaneous search in six collections: (i) a television program collection (metadata records for 700,000 programs, a selection of iMMix); (ii) a photo collection of television programmes; (iii) a wiki dedicated to television programmes and presenters; (iv) scanned television guides; (v) scanned newspapers from 1900 to 1995; and (vi) another newspaper collection from 1995 to 2005. The interface has three different tabs: basic search (search in one collection), combined search (search in all six collections) and similarity search (search for similar documents across different collections). To test the interface, two user studies were conducted: one laboratory and one longitudinal.
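Before turning to the user studies, the logic of the combined search tab can be illustrated with a small Python sketch (collection names and record fields are invented for illustration; the real system searched the six collections listed above): one query is fanned out over all collections and the results are returned grouped per collection.

# Invented mini-collections, for illustration only.
collections = {
    "programmes": [{"title": "Evening news", "text": "daily news broadcast"}],
    "photos": [{"title": "Studio photo", "text": "portrait of a news presenter"}],
    "tv_guides": [{"title": "Guide page, 1970", "text": "listing of the evening news"}],
}

def search(docs, query):
    # Titles of documents whose text matches the query.
    return [d["title"] for d in docs if query in d["text"]]

def combined_search(query):
    # Fan the query out over all collections (the "combined search" tab).
    return {name: search(docs, query) for name, docs in collections.items()}

print(combined_search("news"))
# {'programmes': ['Evening news'], 'photos': ['Studio photo'],
#  'tv_guides': ['Guide page, 1970']}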
The researchers conducted a laboratory study with 44 freshmen students of media and cultural studies. The students got three tasks: (i) imagine that you work at the editorial office of a current affairs program and are asked to collect information about celebrity X; collect at least 5 items deemed to be relevant for this collection; (ii) search for events that were key in the career of celebrity X; collect at least 5 items about these events, for example articles and photographs; (iii) a key television program in the career of celebrity X was program Y; collect at least 5 items about the run-up to the program, the program itself, and the aftermath. To complete the first task, subjects were provided with the tabbed display (one collection), for the second task with the blended display (all collections), and for the third task with the similarity display (similar documents in other collections).
The theme of the course selected for the longitudinal study was television and film history, and the research projects carried out by the students were centred on television personalities from the 1950s to the 1980s. The 26 master students could use CoMeRDa for their research paper, which had to be a photo essay on a celebrity. Every second week, they were asked to complete a questionnaire on their use of CoMeRDa; in the alternating weeks, there was a focus group discussion in which the results of the questionnaire and their feedback on the interface were discussed. The multi-method user study showed that the students preferred the blended display in the first stage of their research project: they wanted to have an overview of what was available. As soon as they had picked a research topic, they preferred to search within one collection at a time. The results of these CoMeRDa user studies are published in [Bron et al 2013].
Figure 2. The CoMeRDa interface.

7.2 Conversation about the research process

The development of CoMeRDa was less difficult, partly because it was a more straightforward interface. Negotiations were about presentation and display rather than about facets and metadata fields.
CS: “During the development of CoMeRDa, we had a better collaboration. We could digitize a part of the programme guide collection as a sample, and you were the one who knew best what kind of sample made sense. The same goes for the newspaper collection: it went back to the 17th century, so you had to make decisions on how to align the collection with the television-related collections.”
Example 18. 
MS: “I knew what to expect and really enjoyed working with a strict planning. I was better able to do last-minute tasks. So for me it was a much easier phase than when we developed and tested MeRDES.”
Example 19. 
Indeed, as the team members had all already gone through the development and testing of MeRDES, mutual expectations were better aligned. The media studies researcher not only started to see how the innovations of computer sciences need a structured way of working and a fast pace, but also adopted some daily working protocols of computer sciences and began to use important computer science terms in daily conversations. She simply knew much better how to communicate.
Similarly, it was much easier to agree on the set-up of the user studies. The researchers agreed that the media studies researcher was the one who should “translate” the surveys and create realistic assignments for the students:
CS: “Well, we had a lot of discussion about the questionnaires, because you thought they were too long and the questions were not clearly formulated. I always wanted to have extensive questionnaires, while you knew that the students would not really enjoy filling in extensive questionnaires. Surveys were always a point of conflict. But once we conducted the study, we were both enthusiastic about the set-up. We had a good task division.”
Example 20. 
The computer scientist decided what they were going to measure for the computer science papers, and the media studies researcher translated the questionnaires from computer science into humanities language, made the assignments doable, and tailored the tasks and set-up to media studies research. The computer scientist recalls: “Without you, it would have been a computer scientist who gave an assignment to media studies scholars. And then they would have had no clue about what to do.”
Moreover, it proved to be important to communicate “live” with each other and avoid written discussions.
CS: “We still had some misunderstandings. We learnt that we had better avoid communicating via e-mail. If we started discussing via e-mail, the mutual misunderstandings piled up. Often, we had a meeting, and then you went home and started to think about it and wrote me an e-mail from home: “It won’t work. We have to change the set-up”. And then I wrote back to you: “No, we have to do it like this, we have to measure it!””
Example 21. 
MS: “Yes, we then realized we had to pick up the phone and talk it through instead of squabbling over e-mail.”[8]
Example 22. 
For the media studies postdoc, there was one major challenge in the software development. She was afraid that they were building tools that would not be used, as seems to be the case with the majority of tools developed by DH-projects (cf. [Golumbia 2013]).
MS: “For me, the most difficult part was that I, or better we, the media researchers, were also research subjects. I continuously had to keep in mind not to make a tool that was completely tailored to what I wanted. If I have to point to the true challenge for BRIDGE, in hindsight, it is the fact that as the humanities researcher of the project you have that double function. You have to be so self-reflexive: on the one hand you have to monitor that you are representing a large, diverse group of researchers, while at the same time you have to give feedback on a daily basis and do the research yourself. It was key that we conducted focus group discussions and so many interviews with different scholars at different institutes in different fields.”
Example 23. 
In the end, it turned out that the tools were appreciated by scholars and students. Of course, a lot of feedback was received and there is still some work to do to make the tools sustainable (see [Martinez-Ortiz et al 2017]). But in general, the research conducted in this phase was considered satisfying by both scholars in the project. Right after the last focus group discussion with the students in the longitudinal study, all the pieces came together: we realized that we had both learnt methods and practices of the other’s field.
CS: “I also liked the fact that we chose a more qualitative approach in the longitudinal study. It gave so many insights. I really think that is one of the assets: that I learnt some of your methodologies and learnt about a humanities perspective. It is a fresh perspective for computer sciences.”
Example 24. 
MS: “Same here. I believe that the key is in mixing both worlds: quantitative and qualitative methods. A bit of computer sciences and a bit of humanities, that is. For humanities, it is often - or better: traditionally - considered “superficial” to base research solely on quantitative measurements. They tend to think: “So what? What do the numbers tell us?”. Humanities scholars need interpretation, culture and meaning. Basically: humans are central in all phases of their research process, as well as in defining their research object. However, bringing the humans into interface development and bringing interpretation into computer sciences is a very exciting new field for the humanities. So, I cannot imagine anymore not thinking a bit “computer sciency” myself.”
Example 25. 
Both researchers clearly crossed the bridge and felt quite comfortable mixing paradigms and methods. If the researchers had to pick one remaining challenge, however, it would be the publication outlets: it is just very hard to read and understand each other’s papers.
CS: “Humanities papers are very philosophical and theoretical. They are full of quotes of what people think about something cultural. They are very woolly, if you ask me. They question your own interpretation. They are so vague because you use words that are not the concept itself. Instead, you use all kinds of difficult words, but not the concept itself, because that is the one that you are trying to explain. I am not against such papers, I really like them, but we are not trained in reading them. It costs time and effort to understand them. You gave me some books about how the humanities do research, about cultural studies and cultural memory, and it really helped. We are not trained in interpretation. As a humanities researcher, you make your own interpretation of the data. That is totally new for us.”
Example 26. 
As most papers about the user studies were written within the computer science paradigm, the same held for the humanities scholar: reading and understanding them cost time and effort. A true challenge was writing papers and articles collaboratively. In the beginning, one of the staff members wrote and the other commented and provided some paragraphs for the article. In the final year, they wrote an article together, with the collaborative software writeLaTeX.
CS: “It was so fun to see that you enjoyed writing in writeLaTeX.”
Example 27. 
MS: “What I really liked is that you can immediately see your paper in a nice lay-out, as if it were already published. You immediately see the result as a nicely laid-out paper, which is much more satisfying than writing in a Word document. It was quite a challenge to learn the code, but it was very efficient.”
Example 28. 
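To give an impression of “the code” in question: writeLaTeX documents are written in plain LaTeX source, which the platform compiles on the fly into the formatted paper. The fragment below is a minimal, generic example of such source; the title, authors and section are placeholders, not taken from the actual article.

\documentclass{article}   % a standard journal-style layout

\title{A Collaboratively Written Paper}   % placeholder title
\author{First Author \and Second Author}  % placeholder authors

\begin{document}
\maketitle

\section{Introduction}
% Both authors edit this shared source; writeLaTeX compiles it on
% the fly, so the formatted paper is visible immediately alongside it.
The text of the paper goes here.

\end{document}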
Still, it is not easy to write together in terms of language, because computer sciences and the humanities write for different audiences. Different journals and conferences require specific formats and styles. There is still a gap, which can only be bridged by new journals in the field of digital humanities.
Eventually, when the project was finished, it was clear that both researchers appreciated and understood important aspects of the other discipline. The computer scientist could finish his PhD on time. The media studies scholar did not write a monograph, nor could she conduct research with the tools, as they were only ready when the project was finished. However, she learnt so much about interface development and computer sciences that she included many of the assets of computer sciences in her subsequent humanities work.
Transdisciplinarity did arise, and the bridge was built. Even more so: after completion of the project, the media studies researcher started to work on a new project and interface, but from then on based at a computer science department. The computer scientist went in the opposite direction and started to work in a joint history-computer science project, based at a humanities department. The once so “other” discipline was from then on “us”. The researchers did not only meet each other in the middle of the bridge; they even crossed the bridge in institutional terms. Admittedly not for long, as both researchers got career opportunities in the “original” discipline, and at the moment of publication work again in their “own” discipline of, respectively, Media Studies and Computer Sciences.

8 Conclusion: living happily together ever after?

In this article, we explicitly chose to describe the challenges that arose within our research project. We viewed those challenges from two perspectives, by using a conversation structure between the two central staff members of the project. The conversation structure enabled us to write an article from two standpoints, and also to show how our standpoints shifted during the course of the project. As such, we embodied two disciplines, and showed how knowledge claims can be negotiated in the practice of a Digital Humanities research project. The researchers of the two disciplines gradually gained mutual understanding.
After setting the scene of both disciplines, we identified three phases in the project. Similar to the common narrative of Hollywood films, there was first an encounter, then a struggle and conflict, and finally appeasement and a “happy” ending. Other research on interdisciplinary projects also found that the first phase of research is experienced as “frustrating” [Bartscherer and Coover 2011]. Semantic interoperability, for instance, was a key problem in the first two stages of the project. We showed the frustrations, but also looked at their flipside: the semantic gap and the epistemological differences between the disciplines often provided inspiration, as they were key in lifting the veil of our own situatedness. In other words, we would like to acknowledge frustration as a facilitator of communication and transdisciplinarity.
The interaction and power relations continuously changed during the course of the project as knowledge at all levels increased. The researchers gained more insight into the particularities of the other discipline’s field, thus developing a shared language and increasing semantic interoperability between the members of the production team. The computer scientist said that the slow pace of the humanities research was something he had to adjust to, and his observation of media studies scholars changed along the way. The humanities scholar learnt to work with a (more) strict planning and also adopted technical language. In the project, it turned out that both researchers learned the language of the other’s discipline, e.g. by reading literature, by discussing the disciplinary boundaries on a daily basis, and by conversations and (ethnographic-style) observations.
Transdisciplinarity especially arose in the set-up of the user studies. It is in the user studies that epistemological encounters took place between the two disciplines. The user studies are, in other words, the space of negotiation where computer sciences and media studies really crossed. In BRIDGE, the computer scientist took care of the experimental set-up of the user studies, while the media studies scholar supervised the “humanities stake” of the set-up, the assignments and the questionnaires for the humanities users. Moreover, the team conducted focus groups and interviews with prospective users, bringing qualitative, ethnographic methods into the common quantitative practice of computer scientists. This not only helped to learn about the other’s discipline, but also added an interpretative layer to the quantitative results. Especially in the user study of CoMeRDa, in which the team used a laboratory study next to a qualitative study, the merits of a triangulation of methods came to the fore. Vice versa, the media studies researcher gained insight into the development of a tool and learnt about the specificities of developing digital tools for media studies.
The most important facilitators of the interface development, too, were the regular contact and interviews with the users, the group of media researchers. During the development of the tool, the users could direct the developers towards desirable functionalities and the use of understandable labels. The interfaces MeRDES and CoMeRDa could not have been made without the encounter of the two disciplines, and the continuous negotiation between both. BRIDGE proved not only to build bridges between nodes of a television archive, but also between researchers and disciplines: a true actor-network in the Latourian sense. Extending the network from two collaborating researchers (in this case a computer scientist and a media studies scholar) to many scholars was one of the key success factors of the project.
The project required both the computer science researcher and the humanities researcher to learn about the tools, techniques, and methodological practices of the other discipline and to find a common research direction in which the strengths of both disciplines could be utilized. This required not only effort in becoming adept at another discipline’s practices, but also overcoming a psychological barrier, i.e., abandoning something you are good at to become an apprentice in another field. These, together with the misunderstandings and frustrations that accompany co-operating with someone with a different background, are obstacles which should not be underestimated, especially with regard to the (anxieties about the) careers of the researchers involved. Institutions, therefore, play a vital role in the enhancement of Digital Humanities and should carefully look at the assessment procedures for tenure, and specifically the assessment of digital publications such as tools and databases.
Researchers should have some of both worlds in their training, one could argue, or at least be able to move between the different spaces so as to support an understanding of each other’s discourses. As [Bakhshi et al 2009, 2] point out, this is of vital importance for effective interaction and a condition for innovation, no matter how complex the network of actors for such innovative performance is. [Bartscherer and Coover 2011] state that there is an increasing number of individuals who have real competence in both domains, but that such individuals are still rare. What we want to stress is that there is still a lot of scholarship needed on user-centred tool development and on methods for digital tool education, such as the training of Digital Tool Criticism within the humanities.
A further problem arose from the physical distance between the project’s members. Scholars start out distanced in terms of paradigms and, often, in terms of physical presence, and e-mail conversations were bound to end up in conflicts. Working in the same space, or at least having regular meetings or conversations, is essential for the good progress of a project. This is simple advice, but highly effective.
A persistent problem, however, is publications. As long as the academic publication outlets are centered in one or the other field, it remains a true challenge to write for a journal whose format, language and paradigms are not yours. Even with this article, it is admittedly the humanities scholar who chose the constructionist theoretical framework and approach. The conversation structure was used as it allowed us to cope with the different paradigms and the different voices of the authors. Still, it was quite a challenge to write this article from two perspectives in terms of words and style.
For a future project, it would be valuable to include a third perspective or voice, that of the heritage partner or data provider, and also to differentiate views within groups, e.g. by including the perspective of the software developer. In that case, it would be necessary to conduct individual interviews or focus groups with all participants throughout the project and to document these carefully. For long-term projects, the problem might be that participants leave the project or (re)join, which makes it difficult to write a joint article and to keep track of all the different voices in the article. In any case, we advise DH-projects to document their collaboration and to have (at least) annual meta-discussions about each other and the project.
Last, reflection requires at least a moment of distance. It would have been not only difficult but even impossible to write this article during the course of the project. At that time, we were preoccupied with reaching our goals, developing the tools on time, setting up the user studies and writing up the papers. Only now can we look back at BRIDGE and reflect on the persons we were at the beginning of the project. We are aware of the fact that in this article we are open about our own “selves” and attitudes during the project. However, we firmly believe that transparency is key not only to research methods and digital tools but also to scholarship as such. Disciplinarily defined clouds keep on mystifying the poles of the bridge. We have built a bridge and walked on it, but are still attracted to the end of the bridge where we were educated. So, the happy end of the project is rather an open ending. Digital Humanities, therefore, is not so much about the results (“how does the movie end?”) but about acknowledging that it is a process of which all elements — the researchers, the interfaces, the methods and the semantics — are interconnected and can view and be viewed from multiple perspectives.

9 Acknowledgements

The authors would like to thank all scholars, software developers and archivists involved in the BRIDGE project. The authors would also like to thank Koen Leurs, Sonja de Leeuw, the three anonymous reviewers and the editorial board of DHQ for feedback on earlier versions of this article.

Notes

[1] The article was written in 2015-16 and submitted in this final version in 2017. Since then, some important new scholarship on collaborations in DH has emerged, among others Max Kemman’s PhD dissertation on collaboration in Digital History and his conceptualisation of DH as a trading zone. In addition, we would like to note that the tools in this article have been made sustainable in the CLARIAH Media Suite, which was launched in April 2019.
[2] This is the case in one of the current projects in which we are involved: the tools are made sustainable in the large-scale Dutch research infrastructure project CLARIAH (2015-18), see clariah.nl for more information.
[3]  However, computer science papers are not (easily) collected, let alone read, by humanities scholars. Therefore, this article also sheds a “humanities” light on the results of the computer science papers.
[4]  About 12 people have been involved in this project, namely in the supervision of the research, the development of the interfaces and the conducting of the user studies: professors, archivists and software developers.
[6]  We are now aware this name does not sound attractive in French-language contexts.
[7]  The interface MeRDES was redesigned into AVResearcherXL as part of the follow-up project QuaMeRDES [Van Gorp et al 2015]. In the current project CLARIAH, AVResearcherXL is being rebuilt as a “recipe tool” in the infrastructure Media Suite, and hence made sustainable for a wider user group of humanities scholars. See [Martinez-Ortiz et al 2017].
[8]  They did not experiment with video conferencing. They preferred to work in the same space rather than having online meetings.

Works Cited

Bakhshi et al 2009 Bakhshi H., Schneider Ph. and Walker C. “Arts and Humanities Research in the Innovation System: The UK Example”, Journal of Cultural Science, 2.1 (2009).
Barry et al 2008 Barry, A., Born, G. and Weszkalnys, G. “Logics of Interdisciplinarity”, Economy and Society, 37.1 (2008): 20-49.
Bartscherer and Coover 2011 Bartscherer T. and Coover R. Switching Codes: Thinking through Digital Technology in the Humanities and the Arts. University of Chicago Press, Chicago (2011).
Bron et al 2011 Bron M., Van Gorp, J. Nack, F. and de Rijke M. “Exploratory Search in an Audio-Visual Archive: Evaluating a Professional Search Tool for Non-Professional Users.” EuroHCIR 2011: 1st European Workshop on Human-Computer Interaction and Information Retrieval, Newcastle (2011).
Bron et al 2012a Bron M., Van Gorp, J. Nack, F. and de Rijke M. “Ingredients for a User Interface to Support Media Studies Researchers in Data Collection.” EuroHCIR 2012: 2nd European Workshop on Human-Computer Interaction and Information Retrieval, Newcastle (2012).
Bron et al 2012b Bron M., Van Gorp J., Nack F., de Rijke M., Vishneuski A. & de Leeuw, J.S. “A Subjunctive Exploratory Search Interface to Support Media Studies Researchers.” SIGIR 2012: 35th international ACM SIGIR conference on research and development in information retrieval, Portland: ACM (2012).
Bron et al 2013 Bron M., Van Gorp J., Nack F., Baltusen L.B & de Rijke M. “Aggregated search interfaces in multi-session tasks.” SIGIR 2013: 36th international ACM SIGIR conference on research and development in information retrieval, Dublin: ACM (2013).
Bron et al 2016 Bron M., Van Gorp, J. and de Rijke M. “Media studies research in the data-driven age - How research questions evolve.” Journal of the Association for Information Science and Technology, 67.7 (2016).
Couldry 2005 Couldry, N. “Transvaluing media studies: or, beyond the myth of the mediated centre” In J. Curran and D. Morley (eds.), Media and Cultural Theory. Routledge, Abington (2005).
Couldry 2008 Couldry, N. “Actor network theory and media: do they connect and on what terms?” In A. Hepp, F. Krotz, S. Moores and W. Carsten (eds), Connectivity, Networks and Flows: Conceptualizing Contemporary Communications. Hampton Press, Inc., Cresskill, NJ (2008), pp. 93-110.
Creswell 2014 Creswell, J.W. Research Design Qualitative, Quantitative, and Mixed Methods Approaches Fourth Edition. SAGE, London (2014).
Flanders 2012 Flanders, J. “Collaboration and Dissent: Challenges of Collaborative Standards for Digital Humanities.” In M. Deegan and W. McCarty (eds), Collaborative Research in Digital Humanities, Ashgate Publishing, Surrey (2012), pp. 67-80.
Giddens 1991 Giddens, A. Modernity and Self-Identity: Self and Society in the Late Modern Age. Polity Press, Cambridge (1991).
Golumbia 2013 Golumbia, D. Building and (Not) Using Tools in Digital Humanities. Blogpost on uncomputing.org, February 5 (2013).
Hall 1992 Hall, S. “The question of cultural identity”. In T. McGrew, S. Hall and D. Held (eds), Modernity and its Futures. Polity Press and Open University, Cambridge (1992), pp. 273–325.
Haraway 1988 Haraway, D. “Situated Knowledges: The Science Question in Feminism and the Privilege of Partial Perspective.” Feminist Studies, 14. 3 (1988): 575-99.
Haraway 1991 Haraway, D. Simians, Cyborgs, and Women: The Reinvention of Nature. Routledge, New York (1991).
Harding 1987 Harding, S. “Introduction: Is there a feminist method?” In S. Harding (ed), Feminism and methodology. University of Indiana Press, Bloomington (1987), pp. 1-14.
Hartley 2009 Hartley, J. “From Cultural Studies to Cultural Science”, Cultural Science, 2.1 (2009). Available at: www.cultural-science.org/journal.
Hessels and van Lente 2008 Hessels, L.K. and van Lente, H., “Re-thinking New Knowledge Production: A Literature Review and a Research Agenda”, Research Policy 37 (2008): 740–760.
Latour 1987 Latour, B. Science in Action: How to Follow Scientists and Engineers Through Society. Harvard University Press, Cambridge MA (1987).
Latour 2005 Latour, B. Reassembling the Social: An Introduction to Actor-Network Theory. Oxford University Press, Oxford and New York (2005).
Lin 2012 Lin, Y. “Trans-disciplinarity and Digital Humanity: Lessons Learned from Developing Text Mining Tools for Textual Analysis.” In D. M. Berry (ed), Understanding Digital Humanities (e-Book version). Palgrave MacMillan, UK (2012).
Manning et al 2009 Manning, C.D., Raghavan, P. and Schütze, H. An introduction to information retrieval. Cambridge University Press, Cambridge (2009), p. 1.
Martinez-Ortiz et al 2017 Martinez-Ortiz C., Ordelman R., Koolen M., Noordegraaf J., Melgar L., Aroyo L., Blom J., de Boer V., Melder W., Van Gorp J., Baaren E., Beelen K., Karrouche N., Inel O., Kiewik R., Karavellas T. and Poell T. “From Tools to ‘Recipes’: Building a Media Suite within the Dutch Digital Humanities Infrastructure CLARIAH”. Abstract for paper presented at DHBenelux Conference, 3-5 July (2017)
McCarty 2005 McCarty, W. Humanities Computing. Palgrave MacMillan, UK (2005).
Menken and Keestra 2016 Menken, S. and Keestra, M. An Introduction to Interdisciplinary Research: Theory and Practice. Amsterdam University Press, Amsterdam (2016).
Naples and Gurr 2013 Naples, N.A. and Gurr, B. “Feminist Empiricism and Standpoint Theory: Approaches to Understanding the Social World.” In S. Nagy Hesse-Biber (ed), Feminist Research Practice: a Primer. SAGE, London (2013), pp. 14-41.
Rosenfield 1992 Rosenfield, P.L. “The Potential of Transdisciplinary Research for Sustaining and Extending Linkages between the Health and Social Sciences”, Social Science Med. 35.11 (1992): 1343–1357.
Van Gorp 2013 Van Gorp J. “Looking for what you are looking for: a media researcher's first search in a television archive.” VIEW: Journal of European Television History and Culture, 1.3 (2013).
Van Gorp et al 2015 Van Gorp J., de Leeuw, S., van Wees J, and Huurnink B. “Digital Media Archaeology: Digging into the Digital Tool AVResearcherXL.” VIEW: Journal of European Television History and Culture, 4.7 (2015).