Digital Media and the Analysis of Film
The history of film studies is a short one, dating from the early to mid-1960s, and evolving from a number of historical events. One was the appearance of the films of the French New Wave – Jean-Luc Godard, François Truffaut, Claude Chabrol, Eric Rohmer, as well as older figures, such as Alain Resnais. These directors, along with Sweden's Ingmar Bergman and Italy's Federico Fellini and Michelangelo Antonioni, among others, demonstrated that the formal, thematic, and economic givens of American cinema were not givens at all. Film had a flexible language that could be explored, opened, rethought. They proved that the conventions of Hollywood storytelling could be pulled and stretched, stood on their head. These films, in short, made many people aware that cinema was a serious form of expression, an art form involved in the production of thought.
The wonderful paradox here was that many of these European directors learned from watching American film. Not knowing English, they read the visual structure of the films they saw and discovered a visual energy mostly missed by plot-centric American filmgoers and film reviewers. They embraced the American style and countered it simultaneously, all the while refusing the pressures of a studio system that, in America, saw films as commodities. And, as the writing and the filmmaking of the French New Wave reached Britain and then the USA, there was an interesting flow-back effect, turning the eyes of burgeoning American film scholars toward their own cinema, largely reviled, mostly forgotten.
In their discovery of American film, the French – concentrating on the visual aspects of what they saw, and understanding that film was essentially about the construction of images – noticed that when the structures of formal and thematic elements cohered, it was around the film's director – no matter what other personnel were involved, including the screenwriter. As these insights were passed overseas, American film suddenly took on a patina of seriousness that only authorship could give it. This, coupled with the excitement of the new cinema coming from overseas, and the call by students for a broader college curriculum, led to film courses being offered by English professors (like myself), and in history and art history departments. Film Studies departments came together more slowly. Publishing of scholarly film articles and books grew apace.
What was taught and studied in the early days? The auteur theory, in which the film director is seen as the formative creative consciousness of a film, turned out in practice to be not mere idolatry, but a means of analysis. If one could identify a filmmaker by certain stylistic and thematic traits, those traits could be understood and analyzed. Or, taking a Foucauldian turn, the auteur could be constructed from a group of films – discovered as an auteur through the work itself, the analysis of which yields ways of cinematic seeing that are recognizable from film to film.
By the late 1960s and into the 1970s and 1980s, the fundamentals of auteurism were enriched by a number of other theoretical and historical practices. In fact, film studies was among the earliest disciplines to apply feminist and gender theory. Laura Mulvey's theory of the formal structures of the gendered gaze in her essay "Visual Pleasure and Narrative Cinema", published in 1975, remains a touchstone not only for film studies, but for art and literary analysis as well. Ideological criticism and cultural analysis, Lacanian psychoanalytic theory, structuralism, postmodern critique – indeed a variety of theoretical permutations – have built film (and, finally, television) studies into a rich and productive discipline.
Throughout this period of growth (if not expansion), one problem remained in both the teaching and writing of film: the ability to prove and demonstrate by means of quotation. In other words, the literary scholar, the historian, the art historian can literally bring the work under study into her own text to prove a point, illustrate an argument, and provide the text and context for analysis. The film scholar cannot.
In teaching, the ability to analyze sequences in front of a class was somewhat better served, though it went through a long phase of never quite adequate development. In the beginning, there were only 16 mm copies of films to show and analyze. The equivalent of Xerox copies, 16 mm film was terrible for general viewing, and selecting a sequence to study involved putting pieces of paper in the film reel as it ran and then winding it back to find the passage. One could also put the reel on manual rewinds and use a small viewer to find the passage, and then remount the whole thing back on the projector. At one point, the department I was working in purchased an "analyzer" projector – the kind that was used by football teams before videotape. One still needed to find the passage in the reel, but, theoretically, one could freeze frames, roll forward or backward, or even step-frame through the sequence. In reality, the machine usually spat out sprocket holes and tore up the film. Videotape was no better in image quality – worse, in fact, because the manufacturers of video believed audiences would not put up with the black bars at the top and bottom of the frame for wide-screen and anamorphic films, and so blew up the image to make it fit the familiar, almost square rectangle, thereby losing as much as half of the original image in a process nicely named "pan and scan."
Laserdisk and now DVD came close to ameliorating the situation. The image resolution is greatly improved. Good video projection creates a sharp image, with correct color or an accurate greyscale, in the proper screen ratio. Access to the precise place in the film to which we want to direct our students' attention, or at which they can find the sequence they wish to discuss, is easy. We can now "quote" passages of a film when teaching it.
But this did not solve the quotation problem for research. The presentation and analysis of cinematic works, no matter what methodology was used, could not be given the proof of the thing itself, the moving image upon which the analysis was based. We could only describe what we saw as a means of bolstering and driving our arguments and, for visuals (often at the insistence of publishers), provide studio stills, which are, more often than not, publicity stills rather than frame enlargements from the film itself. To be sure, there were some painstaking researchers who, with special lenses and the permission of archives, made (and still make) their own frame enlargements directly from a 35 mm print in order to get exactly the image they need. But the images were still, and we were all writing and talking about images in motion.
The situation began to change in the early 1990s. The first indication of a new way to do film studies occurred at a meeting of the Society for Cinema Studies in 1989, when Stephen Mamber of UCLA hooked up a computer, a laserdisk player, and a television monitor and controlled the images of the film on the laserdisk through the computer with a program he had written. This was followed the next year by a colloquium that Mamber and Bob Rosen set up at UCLA, funded by a MacArthur Grant, that advanced Mamber's work considerably. He and Rosen had created a database of every shot in Citizen Kane, Welles's Macbeth, and a film from China, Girl from Hunan (1986). They could access the shots from the computer, through a multimedia program called Toolbook – an excellent multimedia package that still exists and is among the best to do the kind of work I'm describing.
The coup came shortly after this, when a colleague at the University of Maryland demonstrated image overlay hardware that put a moving image from a tape or disk directly into a window on the computer screen. Quickly following this came inexpensive video capture hardware and software that allowed the capture of short or long sequences from a videotape or DVD. The captured and compressed images (the initial capture creates a huge file, which must be compressed with software called "codecs" in order to make it usable) are like any other computer file. They can be manipulated, edited, titled, animated. The film scholar and burgeoning computer user finally discovered that a fundamental problem faced by the discipline of film studies could now be solved. The visual text that eluded us in our scholarly work and even our teaching was now at hand. We could tell, explain, theorize, and demonstrate!
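The scale of the problem those codecs solved can be seen with simple arithmetic. The sketch below is only an illustration: the frame size, bit depth, and target bitrate are assumed, period-typical values, not figures from any particular capture card or codec.

```python
# Rough arithmetic: why raw captured video had to be compressed.
# All parameters are assumptions for illustration, typical of mid-1990s capture.
width, height = 640, 480          # frame resolution in pixels
bytes_per_pixel = 3               # 24-bit color
fps = 30                          # frames per second
clip_seconds = 30                 # a short, quotation-length clip

raw_bytes = width * height * bytes_per_pixel * fps * clip_seconds
raw_megabytes = raw_bytes / (1024 ** 2)

# A codec in the MPEG-1 class targeted roughly 1.5 megabits per second (assumed).
codec_bitrate = 1.5e6             # bits per second
compressed_megabytes = codec_bitrate * clip_seconds / 8 / (1024 ** 2)

print(f"raw: {raw_megabytes:.0f} MB, compressed: {compressed_megabytes:.1f} MB")
```

Under these assumptions, a thirty-second clip shrinks from several hundred megabytes to a handful, which is what made captured clips practical to store and insert into a program.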
But other problems immediately surfaced. One was obvious: how would this work be transmitted? On first thought, the Web seemed the best and most obvious means of distribution for this new kind of film analysis. It wasn't then; it isn't now. To analyze the moving image, it has to appear promptly, smoothly, with as near perfect resolution as possible, and at a size of 320 × 240 pixels or larger (to make things confusing, these numbers refer to image resolution, but translate on the screen as image size). This is impossible on the Web. Even today's streaming video and high-powered computers cannot offer the image size, resolution, and smoothness of playback that are required, despite the fact that a few successful Web projects have emerged, mostly using very small images to minimize download time. Until Internet2 becomes widely available, CD and DVD are the only suitable media for transmission of moving images. (Recordable DVD is still in its standardizing phase, with a number of competing formats.) There is something on the horizon that will allow the Web to control a DVD in the user's machine, and we will get to that in a bit.
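A back-of-envelope calculation suggests why even a compressed clip defeated the connections of the time. The bitrates below are assumed, illustrative values, not measurements of any particular network.

```python
# How far playback bitrate outran common connection speeds (assumed values).
playback_bitrate = 1.5e6          # bits/s, an assumed rate for smooth 320x240 playback
connections = {
    "56k modem": 56e3,            # bits/s, nominal dial-up
    "early DSL": 384e3,
    "campus LAN": 10e6,
}

for name, speed in connections.items():
    # Seconds of downloading needed per second of smooth video.
    ratio = playback_bitrate / speed
    print(f"{name}: {ratio:.1f} seconds of download per second of video")
```

On these assumptions, a modem user would wait nearly half a minute for each second of video, which is why early Web projects either shrank the image drastically or abandoned the Web for CD and DVD.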
The other problem involves the programming necessary to create a project that combines text and moving image. There are relatively simple solutions: moving image clips can be easily embedded into a PowerPoint presentation, for example. A Word document will also accept embedded moving images – although, of course, such an essay would have to be read on a computer. HTML can be used relatively easily to build an offline Web project that will permit the necessary size and playback speed for moving images. But more elaborate presentations that will figuratively or literally open up the image and allow the user to interact with it, that will break the image into analyzable chunks, or permit operations on the part of the reader to, for example, re-edit a sequence – all require some programming skills. This is probably not the place to address the pains and pleasures of programming. There is no getting around the fact that one will need to know some basics of programming (and the varieties of online users' groups, who will answer questions); but ultimately, programming is only half the problem. The other half, perhaps the more important, is designing the project: creating an interface, making the screen inviting, as easy or as complex to address as its creator wishes it to be. And, preceding the interface, there must lie a well-thought-out concept of what we want a user to do, to discover, to learn. In other words, to the analytic and theoretical skills of the film scholar must be added design and usability skills as well.
There are a variety of tools available to execute design and usability, each with its particular strengths and weaknesses. Macromedia Director is good for animations to accompany the exposition, but it requires a great deal of programming in a non-intuitive environment. Toolbook is an excellent, Windows-only program. Its scripting language is fairly simple (or fairly complex, depending upon your need), and much of it is in plain English. Visual Basic is a powerful tool, requiring a great deal of programming knowledge. The solution for all of this, of course, is for the film scholar to work closely with a student with programming skills, so that the scholar becomes essentially a concept and content provider. On the other hand, it is very satisfying to learn the necessary program or multimedia package. More than satisfying: by working out the programming, one learns how to structure and form ideas by and for the computer – in effect understanding the grammar that will express what you want the viewer to see. By understanding the form and structure of computational possibilities, you can generate the design, the interactivity, and the images and analyses that integrate concept, execution, and reception.
Many scholars have experimented with various modes of computer representations since the 1990s. I've indicated that the work of Stephen Mamber was responsible for getting me and others started in using the computer to analyze films. Mamber went on to do what still remains the most exciting and complex computer-driven work of cinematic analysis, his "Digital Hitchcock" project on Hitchcock's The Birds. The project started in part when the Academy of Motion Picture Arts and Sciences allowed him access to the script, storyboards, and other material related to the film. Programming from scratch, Mamber created a stunning presentation: beginning with the representation of the first frame of every shot of the film, all of which he managed to put on one screen. In other words, the whole film is represented by a still of each of its shots and each still is addressable. When clicked, they bring up the entire shot.
Mamber compares Hitchcock's storyboard illustrations side by side with moving clips from the finished film, to demonstrate how closely Hitchcock hewed to his original conceptions. (Hitchcock was fond of saying that, for him, the most boring part of making a film was actually making the film, because he had completed it before it went to the studio floor.) Mamber's comparison of sketches with shots proves Hitchcock both right and wrong, because it indicates where he deviates from the sketches to achieve his greatest effects. The program allows the clip to be played while the successive storyboard sketches appear next to the sequence.
In this project, Mamber began what has become some of his most exciting work, the creation of 3-D mock-ups of filmic spaces. Working on the theory of "imaginary spaces", Mamber shows how the images we look at in a film could not possibly exist. They are almost the cinematic equivalents of trompe l'oeil, using the two-dimensional surface of the screen to represent a fictional space that the filmmaker deems necessary for us to see, without regard to the impossible spaces he or she is creating. By making the entire space visible through a three-dimensional simulacrum of it (in effect creating a simulacrum of a simulacrum), Mamber exposes not only the fictional space, but the process of representation itself. His spatial representations provide an analytical understanding of the continuity of movement through space that filmmakers are so keen on maintaining, despite the optical trickery used to create it. His work becomes, in effect, an exposure of the ideology of the visible. He creates an alternative world of the alternative world that the film itself creates.
Mamber clarifies a film sequence through 3-D rendering, showing how it is constructed for our perception. He has made, for example, a painstaking animated reconstruction of the opening of Max Ophüls's The Earrings of Madame De…, showing the intricate movements of the camera – by putting the camera in the animation – during an amazingly long shot. He has done a still rendering of the racetrack and a flythrough of the betting hall in Kubrick's The Killing in order to show the spatial analogues of the complex time scheme of the film. This line of investigation holds enormous potential for all narrative art, because he is essentially discovering ways of visualizing narrative. Film is, obviously, the perfect place to start, because its narrative is visual and depends upon spatial relationships, as well as the temporal additions of editing. But literary narrative also depends upon the building of imaginary spaces, and the kind of visualizations Mamber is doing on film could go far toward a mapping of the spaces of the story we are asked by fiction to imagine.1
Other pioneering work in the field of computer-driven film studies includes Marsha Kinder's 1994 companion CD-ROM to her book on Spanish film, Blood Cinema. Here, film clips and narration elaborate the elements of Spanish cinema since the 1950s. Lauren Rabinovitz's 1995 CD-ROM, The Rebecca Project, is a wonderful example of how CD-ROMs are able to combine images, clips, critical essays, and other documents to drill deeply and range widely through information about, and analysis of, Hitchcock's first American film. Robert Kapsis's Multimedia Hitchcock, displayed at MoMA, also brings together a variety of information, images, clips, and commentary on Hitchcock's work. Similarly, Georgia Tech's Griffith in Context (Ellen Strain, Greg Van Hoosier-Carey, and Patrick Ledwell), supported by a National Endowment for the Humanities (NEH) grant, takes representative sequences from Birth of a Nation and makes them available to the user in a variety of ways. The user can view the clips, attempt a re-editing of them, listen to a variety of commentary from film scholars, view documents, and learn about the racial and cultural context surrounding Griffith's work. MIT's Virtual Screening Room (Henry Jenkins, Ben Singer, Ellen Draper, and Janet Murray), also the recipient of an NEH grant, is a huge database of moving images, designed to illustrate various elements of film construction. Adrian Miles's "Singin' in the Rain: A Hypertextual Reading", appearing in the January 1998 film issue of Postmodern Culture (at <http://muse.jhu.edu/login?uri=/journals/postmodern_culture/v008/8.2miles.html>), is one of the most ambitious projects involving image and critical analysis, and one of the few successful web-based projects with moving images.
Miles uses small and relatively fast-loading QuickTime movies with an intricate hypertextual analysis – hypervisual might be more accurate – that not only explores the Kelly-Donen film, but experiments with the possibilities of a film criticism that follows non-linear, reader-driven paths in which the images and the text simultaneously elucidate and complicate one another. This is textual and visual criticism turned into Roland Barthes's writerly text.
My own work has followed a number of paths. Early attempts emerged out of an essay I wrote on Martin Scorsese's Cape Fear (1991). Cape Fear is not great Scorsese, but, watching it, I was struck by something very curious I could not quite put my finger on: I could see other films behind it, in it, lurking like ghosts. None of these ghosts was the original 1962 Cape Fear, an early Psycho imitation. But it was Hitchcock I was seeing, especially early 1950s Hitchcock, before he had got into his stride; I was seeing Stage Fright (1950), Strangers on a Train (1951), and I Confess (1953). The proof of my intuition appeared as soon as I looked closely at these three films. What I had intuited were not ghosts, but a kind of palimpsest, images and scenes that lay under Scorsese's film. He was quoting – indeed recreating scenes – from these earlier films. Writing the essay analyzing these quotations didn't seem sufficient: I wanted a way to show them.
I built a very simple Toolbook program, using a laserdisk and an overlay window, that merely presented a screen of buttons naming the various scenes from Strangers on a Train and Cape Fear; when a button was pressed, and with the proper side of the appropriate disk in the player, the images were displayed on the computer screen. The interface was plain, but the first step had been taken. The laserdisk/computer interface was extremely confining. There needed to be a way to capture the images and have them on the computer hard disk, quickly available and easy to insert into a program. Reasonably priced image capture boards that allowed easy digitizing and compression of short-duration clips, even on the then low-powered desktop PCs, were already being developed in the early 1990s. This was the obvious answer to the problem: making image files as accessible and manipulable as any other digital artifact. Here was hardware and software that turned the moving image into binary code, and once so encoded, almost anything could be done with it.
With the encouragement and assistance of John Unsworth at the Institute for Advanced Technology in the Humanities, I wrote a kind of manifesto, "The Moving Image Reclaimed", for a 1994 issue of IATH's online journal Postmodern Culture (http://muse.jhu.edu/login?uri=/journals/postmodern_culture/v005/5.1kolker.html). The essay included various moving image files, including the major comparisons of Cape Fear and Strangers on a Train. It also included an idea of how the computer could be used literally to enter the image and diagram it to show how it worked. I took part of a very long shot in Citizen Kane, where Mrs Kane sells her son to Mr Thatcher for a deed to a copper mine – a sequence in which the movement of the camera and the shifting positions of the characters tell more interesting things about the story than the dialogue does – and animated it. That is, I plucked each frame from the sequence and, overlaying the frames with various colored lines, put together a variation of a rotoscope (an animation technique in which live action is traced over and turned into a cartoon). The result was a visualizing – a map – of how eyeline matches (the way a film allows us to understand who is looking at whom, and why) and the changing positions of the various characters were staged and composed in order to represent the spaces of Oedipal conflict.
Unsworth and I decided that MPEG would be the best format for the purpose (this was some time before the development of streaming video), though we could not transmit the sound. The essay and images were put online. The images were large, and they took a great deal of time to download, but the project successfully proved the ability to solve the quotation problem.
While it was a worthwhile experiment, it confirmed that the Web was an imperfect transmission vehicle for moving images. The next project was the creation of an interactive program that would provide a visual introduction to the basic elements of film study. Prototyping in Toolbook, I created a series of modules on editing, point-of-view, mise-en-scène, lighting, camera movement, and other issues that seemed to me most amenable to the use of digitized clips. This was not to be a program on how to make a film, but, to borrow the term of James Monaco (who has himself recently published a DVD full of text and moving images), how to read a film. The program uses text and moving image, both often broken up into successive segments to show how a sequence is developed, while always allowing the user to look at the entire clip. It included interactivity that allowed the user, for example, to put together a montage cell from Eisenstein's Potemkin, to see the results of classical Hollywood three-point lighting by "turning on" the key, fill, and backlighting on a figure, or to step through a sequence in Vertigo to show how camera movement and framing tell a story different from the one a character in the film is telling. It contained a glossary, so that by clicking on any hotword in the text, one would jump to a definition of the term.
The point-of-view (POV) module was among the most challenging. POV is a difficult concept to analyze even in the traditional modes of film theory (though Edward Branigan and others have made important contributions to our understanding). How to show how point-of-view works toward narrative ends – how the film guides our gaze and provides a "voice" for the narrative – required careful choices and execution. I went to a film that was about point-of-view, Hitchcock's Rear Window (1954). (It is, incidentally, no accident that so many film/computer projects focus on Hitchcock, one of the most complex formalists of American film.) By combining stills and moving images, winding up finally with a 3-D flythrough that indicates the perceptual spaces of the film and the way they turn at a crucial moment, the user is able to gain a visual grip on the process and understand how filmmakers make us see the way they want us to see.
The creation and publication of the project, called Film, Form, and Culture, offers a useful example of other problems and successes that arise once you have decided to go digital. For one thing, most publishers are reluctant to publish CDs or DVDs alone. They are, after all, book publishers, and they still want paper. While continuing to prototype the CD-ROM project, I wrote an introductory textbook, which not only introduces terms and concepts basic to film studies, but also examines the history of film in the contexts of the cultures that produced it. The book was not written as a supplement, but as a stand-alone companion to the CD. Elements in the text that are covered on the CD are cited at the end of each chapter. And the book itself is illustrated by digital stills – that is, stills grabbed by the computer directly from the DVD or VHS of a film, a process that allows the creation of a sequence of shots which, while still, convey some of the movement between them, as well as indicating ways in which editing connects various shots.
The publisher of the work, McGraw-Hill, bid out to a professional CD-authoring company to produce the distribution CD. An important lesson was learned from this. The film scholar, used to dealing politely with editors, who, more often than not, had the right ideas on how to improve the language, organization, and accuracy of a manuscript, is here confronted with the age-old problem of the "creator" (or, in the discourse of CD or Web authoring, the "content provider") vs. the producer, who was working within a budget provided by the publisher. The CD producer converted my prototype into Macromedia Director in order to create a cross-platform product. We worked closely to choose a suitable interface, but some small "creative differences" ensued. The producer wanted a uniform interface, so that each screen would look the same. This is common multimedia/web-design practice, though it can limit flexibility, especially when one wants to break the limits of the screen frame or present alternative ways of laying out the material. There was an urge to cut down on interactivity, slightly diminishing the immersive experience for the user. They insisted on small anomalies, such as not allowing the cursor to change into a hand icon over a link.
These were small, limiting constraints, and in some cases some convincing on my part was needed to maintain my original conceptions, especially when they involved pedagogical necessities. But, as with all such give and take, the constraints were turned to advantages, and, to the producers' great credit, the last module, on music, turned out to be one of the best on the CD. For that module I provided only concept, assets (images and sounds), and text; they developed the appearance of the section from some preliminary prototyping on my part, which used Sergei Eisenstein's graphic visualization of how Prokofiev's score and Eisenstein's images work in perfect abstract relationship to each other in Alexander Nevsky (1938). In the second edition of the CD, many problems were solved. The cursor changes to a hand over a link. The new material on film noir and on sound is well executed. The interface remains, and even more interactivity was added and can still be added. As with all digital projects, it is a work in progress, undergoing improvement with each new edition and with the help of very supportive publishers.
But one issue, much more immediately pressing than dealing with producers, hangs like a cold hand over the project of using digital media in the study of film or any other discipline. This is the issue of copyright and intellectual property (IP) law. Simply put, IP is the legal issue of who owns what, who can use it, and where it can be used. But the issue is not simple. The US government is making copyright laws more and more inflexible and has made Fair Use (the legally permitted use of small portions of a copyrighted work for educational purposes) more difficult to apply, beginning with the 1998 Digital Millennium Copyright Act. But many of the new laws are undergoing challenges. And, with the major exception of Napster, there have been few major test cases to indicate how the various parties – content owner and content user – would fare in court. No one wants to go first!
It might be helpful to briefly outline the major copyright laws and then go into some more detail on the major problems in gaining licenses, rights, and permissions for digital media, with the full knowledge that such a complex issue can never be completely elucidated in a short space.
• The overwhelming majority of feature films are under copyright.
• Copyright has (in its original form) limits:
– Published before 1923, the work is in the Public Domain (PD).
– Works created after January 1, 1978, are copyrighted for the life of the author (in film, often considered the Studio) plus 70 years.
– Works published from 1923 to 1963: copyright holds for 28 years, renewable up to 67 years. There are other variations, but these are the most pertinent.
– The Sonny Bono Copyright Term Extension Act extends copyright on many works for as much as 95 years, and gives foreign works this extension even if the works were not formally copyrighted. Works published in or before 1922 remain in the Public Domain. The Bono Act is under litigation in the Supreme Court. The restrictive 1998 Digital Millennium Copyright Act is also undergoing challenges.
– Legislation has been passed for the "Technology, Education and Copyright Harmonization Act" (TEACH), which aims to modify copyright restrictions for Distance Education.
– Films in the Public Domain may have "subsidiary rights." That is, while the film may have come into PD, parts of it – the score, for example – may have been separately copyrighted. Carol Reed's The Third Man is an example.
– Licensing clips for a CD – even an educational one – takes a strong stomach, a very busy fax machine, and often a lot of money.
Copyright issues are, perhaps even more than developing one's work, the greatest hindrance to scholars making use of digital interventions in film studies. However, they are not insuperable. Copyright concerns and the energy required for licensing clips should in fact stop no one, but only cause one to act cautiously and judiciously, especially if the work is aimed at publication. The Library of Congress (the repository for all copyrighted material) has, for example, a three-volume listing of films in the Public Domain. The last volume has an appendix that, as of the early 1990s, lists films for which extensive research on subsidiary rights was made. Many of these films are available on magnetic or digital media. This is the best first source for what is freely available – especially if the Bono Act is overturned.
After the research for PD titles, the very best way to start the rights and permissions process, or at least gain an entry to the studios that own copyrighted material, is to contact the film's director – assuming he or she is alive. For Film, Form, and Culture, one filmmaker gave us complete use of one of his films, and signed a clearance for it. Another, Oliver Stone, a great filmmaker and an admirer of film studies, wrote to each of his distributors asking them to help me out. Sometimes even this will not prevail. Disney, whose Hollywood Pictures released Nixon, appreciated the personal note, but announced their policy of never giving permissions for CDs. However, Warner Bros did license thirty seconds of JFK, and the same length of clip from Citizen Kane, for $3,000 apiece.
For the second edition, I added a module on film noir. I was most interested in using images from one of the greatest of the late-1940s noir directors, Anthony Mann. His dark, brutal, misanthropic films are currently owned by the famous publisher of children's literature, Golden Books. The ways of copyright are strange, indeed. Golden Books charged $1,000 for sixty seconds. In the great scheme of licensing clips, this is not a lot of money, and publishers who understand the value of having a CD with a book will pay such a relatively modest fee.
For other clips, the Hitchcock Estate, which, when I started the project, still controlled Rear Window and Vertigo, worked with Universal Pictures, which – unlike the Estate, but at its urging – reluctantly allowed me to use clips from those films. My understanding is that Universal and other studios have since streamlined both their rights process and their attitude. Everything else used was in the Public Domain.
Certainly, none of this was easy (though now it may be at least easier, given such companies as RightsLine and RightsIQ, which can help track who holds various rights to a film). Every successful dealing with a studio ends in a long and intimidating contract, which must be read carefully. The result, however, is worth the labor. The work of copyright searches and acquisition of licenses allowed Film, Form, and Culture, for example, to present students with clips from a large variety of films and to use them in some detail, showing the precision with which they are made and the variety of ways in which they can be read and analyzed.
I would only repeat that IP concerns should not be the initial obstacle to using moving images in a pedagogical or scholarly work. There are many ways to get what you need, if not always what you want. And Fair Use should still prevail for in-class pedagogical use, for digital stills in books, and, since the TEACH Act has been passed, for wider educational distribution. The studios have become somewhat more accommodating, a response that may have to do with DVDs and the unexpected popularity of their "supplementary material." DVDs also offer something of a new frontier in the development of digitized media in film studies and, hopefully, a relief for copyright concerns.
DVDs have proven a boon to film scholars and film viewers alike. They have compensated for the culture's massive error in visual judgment when it chose VHS over Beta (which had a higher image resolution) in the 1980s. The success of DVD has proven that image quality does count. DVDs have been of great help to film studies: they are cheap for an academic department to buy, and readily available for students to rent and view at home. Their "supplementary material" (directors' and actors' commentaries, demonstrations of how scenes are shot and how computer graphics are integrated into a film) has proven enormously popular, even though it is mostly illustrative and anecdotal. This success, as I noted, came as something of a surprise to the studios, and as a result they may be taking notice of the fact that viewers are, first of all, interested in older titles, and, second, do want to know more about the film they are watching. In fact, studios may now be admitting that a film might have educational value! Herein lies some hope on the licensing issue. The studios (and this is pure fantasy at the moment) may some day accept a scholar's argument that use of their material – without "copying" clips – will further sales of their films by educating new viewers, rather than deriding the person who makes such an argument.
Use of DVDs for analysis is being made possible by some new technology that opens interesting possibilities of using commercially available DVDs, controlling them as one would a digitized file on the computer, and even creating a set of analytic tools available on a CD or on a website to address the DVD in the individual's own computer. Some of this work has received funding from the National Endowment for the Humanities.
Again, programming issues are at stake here, and various choices have to be made, depending upon whether the creator of a DVD-based project wants a standalone (that is, a project that exists only on a computer or CD-ROM, along with the DVD) or a web-based project that contains the controls, a database, and an interface online, which together address the DVD in the user's computer. Indeed, unlike digitized clips, addressing the DVD from the Web does not result in image degradation, because the image remains on the user's computer.
Again, the coding issues can be overcome in various ways. But there is yet another issue involved. While the use of a commercial DVD in no way bypasses copyright issues, it may (and I must emphasize that I, like anyone else, can only speculate) make licensing easier to obtain or cause even less of a worry for Fair Use in a classroom or other educational settings. After all, one is not digitizing copyrighted material, but using something that is already commercially available. The content provider, however, may still see this as using their material, in ways other than for home viewing.
It may, finally, seem as if we have come full circle to the time when a computer controlled a laserdisk. There are important differences. DVDs are digital; laserdisks were analogue. Laserdisks had to be played either on a monitor, or on an overlay window on the computer screen through special hardware. DVDs play directly through the computer's graphics display card – as long as there are the proper drivers (which come automatically with any computer that has a DVD drive). All this makes control of DVDs easier and more flexible than laserdisk. In a relatively simple Visual Basic program, one can even capture a still from the DVD directly to disk. I have been able to program a DVD so that, with user input, the film is paused, the exact time of the pause is passed to the computer, which then uses that time to call up other information I have prepared in a database that is relevant to that moment or shot in the film. This can be combined with other images and analysis that further explains the shot or the sequence it is part of – and finally the film as a whole. In other words, this combination of database and image, along with the interactivity that gives the user choice over what part of the film he or she wishes to examine, provides an invaluable method to thoroughly analyze a single work.
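The core of the lookup described above – mapping the pause time reported by the DVD player to the shot that contains it, and from there to the prepared annotation – can be sketched in a few lines. This is a hypothetical illustration in Python, not the original Visual Basic program; the shot boundaries and annotations are invented for the example.

```python
import bisect

# Hypothetical shot table: (in_point, out_point, annotation), times in seconds.
# In a real project these rows would come from the prepared database.
shots = [
    (0.0, 12.5, "Opening aerial shot; zoom establishes the neighborhood."),
    (12.5, 31.0, "Cut to interior; dialogue overlaps the previous scene's sound."),
    (31.0, 47.2, "Slow zoom toward the television set."),
]

in_points = [s[0] for s in shots]  # sorted list of shot start times

def annotation_at(pause_time):
    """Return the annotation for the shot containing pause_time, or None."""
    i = bisect.bisect_right(in_points, pause_time) - 1
    if i >= 0 and shots[i][0] <= pause_time < shots[i][1]:
        return shots[i][2]
    return None
```

Pausing at 15.0 seconds, for instance, falls inside the second shot's in/out range, so its annotation is returned; a time past the last out point yields nothing.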
The database, created and connected to the DVD, contains information for every shot in the film – dialogue, discussion of the narrative moment, mise-en-scène, editing patterns, and so on. A database is essentially a static set of cubicles, each containing various kinds of data, which can be organized and then accessed in a variety of ways. We use them continuously: whenever you go online to use the campus library, to shop, or to buy an airline ticket, you are manipulating a database. Terabytes of personal information (more being added each day) are stored by government agencies in databases. It is not a stretch to say that the computerized database is the one completely structured item in an otherwise chaotic world – though this is no guarantee of the veracity of the data it contains. It must be ordered correctly and filled with well-thought-out data, and at the same time be accessible and manipulable, containing data in the form of text, numbers, sounds, or images that can be drawn upon in a number of different ways. Using a relatively simple query language called SQL (Structured Query Language, better known as "Sequel"), the database becomes a malleable, flexible thing, alive to query and incredibly responsive if you ask the right questions of it. Database tables are themselves combinable, open to cross-queries, with the ability to pull together a large array of information.
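A minimal sketch of such a shot database and an SQL query against it, using Python's built-in sqlite3 module; the table layout, field names, and shot data are invented for illustration, not taken from the project described here.

```python
import sqlite3

# An in-memory database with one table: one row per shot in the film.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE shots (
    id INTEGER PRIMARY KEY,
    in_point REAL, out_point REAL,        -- seconds into the film
    dialogue TEXT, mise_en_scene TEXT, editing TEXT)""")
conn.executemany(
    "INSERT INTO shots (in_point, out_point, dialogue, mise_en_scene, editing)"
    " VALUES (?, ?, ?, ?, ?)",
    [(0.0, 12.5, "", "aerial view of the city", "long take"),
     (12.5, 31.0, "overlapping conversation", "cramped interior", "cut on sound bridge"),
     (31.0, 47.2, "", "television dominates the frame", "slow zoom")])

# A query is how the static "cubicles" become responsive: here, every shot
# whose editing field mentions a zoom, in order of appearance.
rows = conn.execute(
    "SELECT id, in_point FROM shots WHERE editing LIKE '%zoom%'"
    " ORDER BY in_point").fetchall()
```

The same table could be joined against others – a table of characters, say, or of musical cues – which is the "cross-query" flexibility the paragraph describes.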
Lev Manovich (2001: 226) theorizes that the database is essential to computer aesthetics, part or, perhaps more appropriately, the genesis of the various narratives – including analytic and theoretical narratives – we can create by computer. "Creating a work in new media", he writes, "can be understood as the construction of an interface to a database", the means of accessing, combining, temporalizing, and spatializing static bits of data.
Allow me another example from my own work: before DVDs were available, I had started work analyzing Robert Altman's 1993 film Short Cuts, using a five-minute digitized clip. My initial interests were to create a search engine so that various key terms (say, "zoom", a camera lens movement that Altman turns into an aesthetic and narrative statement) would yield the corresponding shots in which the zoom was used. What I would like to do now, when the DVD becomes available, is to create a critical narrative of the entire film, especially concentrating on its complex narrative structure and the way that complexity is made perfectly comprehensible to the viewer through Altman's editing, his use of color, the matching of character movement, and, throughout, an abstract, intuited pattern of movement across the narrative fields of the film, which, indeed, can itself be thought of as a sort of database of narrative events.
The creation of a database for a three-hour film is no easy task. A number of decisions must be made to make it dynamic, and all of them depend on the imagination and work of the film scholar. There is little automation here, but an extraordinary opportunity to learn about the film. One must choose the various elements that describe and analyze each shot – for example, narrative elements, mise-en-scène, editing, dialogue, and so on. And, indispensably, though not very creatively, the "in" and "out" points for each shot – that is, where a shot ends and where the next shot begins – must be found and entered in the database. The in and out points are where the program will first go to find the shot and the relevant information the user wants for that shot. The filling-in of the database fields is where the major analytic work is done; we have to make the same judgments, the same analytic insights, and apply the same methodologies as we would in writing an article or a book. The difference is a kind of fragmentation and the adoption of a more aphoristic style than the critical writer is used to. At the same time, the writer has to be aware of a thread of analysis running through all the database entries and make sure that the keywords the user may want to search on appear wherever appropriate.
The ability to search is a major advantage of a database: if the user wants to find any place in the film where a word is used, a color combination is present, a narrative element can be found, she should be able to enter that word, click a button, and have a list pulled from the database. Clicking on any entry in the list will bring up the accompanying shot.
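The search loop just described – a keyword in, a list of matching shots out – reduces to scanning the text fields of every record and collecting the hits. A hypothetical sketch in Python, with invented records standing in for the database:

```python
# Hypothetical records: each shot's database fields, keyed by shot id.
shot_fields = {
    1: {"dialogue": "", "notes": "zoom across the parking lot"},
    2: {"dialogue": "phone rings", "notes": "static medium shot"},
    3: {"dialogue": "", "notes": "zoom into the aquarium; color shifts to green"},
}

def search(keyword):
    """Return ids of shots whose fields mention the keyword (case-insensitive)."""
    kw = keyword.lower()
    return [shot_id for shot_id, fields in sorted(shot_fields.items())
            if any(kw in text.lower() for text in fields.values())]
```

Searching for "zoom" returns shots 1 and 3; clicking an entry in that list would then use the shot's in point to cue the DVD to the corresponding moment.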
The search function opens up an entirely new problem in film analysis. We are, in effect, giving over some critical work to the user of the program. This opens up difficult critical and functional questions, the most striking of which is: what is the reader going to want to search for? Do we second-guess the user when we create the database, or create a list of keywords that we know the user may want to search on, and then make certain that they appear in the appropriate fields of the database? Do we provide such a list in the program itself, perhaps as part of the introductory apparatus, thereby suggesting what the user might want to find? When we go to a Web search engine, like Google, we know more or less what we are looking for, even though what we get may not precisely fit our query. Going to the interface of a film database and guessing what we want to find is a problem of a different order. We have, somehow, to present a critical apparatus within the interface that discusses the program's intent and offers some guidance for the user. As much interactivity as we provide, we must provide as well a guide to the critical narrative we are striving to get the user to put together.
We have, then, discovered a second major advantage of the use of the computer in film studies. The first was the ability to quote from and analyze a given sequence of a film. The second is the database, as huge as one wants, and as fine-tuned as one needs, full of ideas, information, analysis, and offering the user the ability to connect the data to the precisely relevant images within a complete film.
What we need, finally, to make this complete is a program that searches images themselves. We can tag images in the database and search that way. In other words, we can describe the image content, color, composition, and narrative relevance, and allow the user to choose these terms and bring up the related images or shots. A new standard, MPEG-7, promises the ability to tag the image itself. But all of these approaches are still text-based: we have to write out a description for, or append one to, the image and then search for it by entering a keyword. There is software available to search still images by example – that is, by clicking on one image, other images with similar colors or shapes will be called up. These are complex applications, not yet easily available to a film scholar.
Searching moving images is another matter still. The ability, for example, to search a database of Short Cuts for zoom shots of specific kinds, based merely on clicking on an example of that kind, would open up new avenues for studying a film's textuality, an auteur's style, and, most important, begin to enable us to understand the structure of cinematic representation itself. Software to search moving images is slowly being developed, although a researcher at Kodak told me "not in our lifetime." But that was six years ago.
1 Another aspect of Mamber's investigations, on the theory and ideology of the surveillance camera, can be found online at <http://www.cinema.ucla.edu/mamber2/>.
References for Further Reading
Branigan, Edward (1984). Point of View in the Cinema: A Theory of Narration and Subjectivity in Classical Film. Berlin and New York: Mouton.
Kolker, Robert (1994). "The Moving Image Reclaimed." Postmodern Culture 5.1 (September). At http://muse.jhu.edu/login?uri=/journals/postmodern_culture/v005/5.1kolker.html.
Kolker, Robert (1998). "Algebraic Figures: Recalculating the Hitchcock Formula." In Play It Again, Sam: Retakes on Remakes, ed. Andrew Horton and Stuart Y. McDougal. Berkeley: University of California Press.
Mamber, Stephen (1998). "Simultaneity and Overlap in Stanley Kubrick's The Killing." Postmodern Culture 8.2 (January). At http://muse.jhu.edu/login?uri=/journals/postmodern_culture/v008/8.2mamber.html.
Manovich, Lev (2001). The Language of New Media. Cambridge, MA: MIT Press.
Miles, Adrian (1998). "Singin' in the Rain: A Hypertextual Reading." Postmodern Culture 8.2 (January). At http://muse.jhu.edu/login?uri=/journals/postmodern_culture/v008/8.2miles.html.
Mulvey, Laura (1975). "Visual Pleasure and Narrative Cinema." Screen 16.3 (Autumn): 6–18. Reprinted at http://www.bbk.ac.uk/hafvm/staff_research/visuall.html.
Information on copyright is from Lolly Gasaway, University of North Carolina http://www.unc.edu/~unclng/public-d.htm. David Green provided further information on the Bono Act and treats it and many other IP issues in Mary Case and David Green, "Rights and Permissions in an Electronic Edition", in Electronic Textual Editing, ed. Lou Burnard, Katherine O'Brien and John Unsworth (New York: MLA, forthcoming).
Among many sources for information on IP law and policy are NINCH (the National Initiative for a Networked Cultural Heritage), at http://www.ninch.org, the Digital Future Coalition, http://www.dfc.org, and the American Library Association, http://www.ala.org.