Chapter 7 – The participative documentary through the lenses of the Live documentary

Participatory culture and interactive documentary



It has been seen in chapter 3 that the role of the filmmaker as a subjective observer, and the opening of video production to amateurs, do not have their roots in YouTube or Web 2.0, but are the result of a cultural, scientific and technological context that has repeatedly questioned the authority of the author/filmmaker/scientist throughout the 20th and 21st centuries. The ‘camcorder cultures’ of the 1990s (Dovey, 2000), the culture of ‘vernacular video’13 (Burgess & Green, 2009:25) and the avant-garde dreams of an open video language (Sorenssen, 2008) are seen by media theorists Dovey and Rose as the main influences on a ‘situated documentary aesthetic’ (Dovey and Rose, n.d:3) that seems to say ‘I was there’, ‘I experienced this’, ‘I saw that’ (ibidem) rather than ‘this is how it is’. Collaborative sites such as YouTube, Flickr and Wikipedia are therefore flourishing because they channel a cultural need that was ready to be expressed, not because they have engendered such a need. Media theorist Jenkins, in What Happened Before YouTube, reminds us that it is ‘the emergence of participatory cultures of all kind over the past several decades’ that has ‘paved the way for the early embrace, quick adoption, and diverse use of such platforms [as YouTube]’ (2009:109), and not vice versa.

This being said, as in any dynamic relation, the communication logics afforded by social media have increased our abilities ‘to share, to cooperate with one another, and to take collective action, all outside the framework of traditional institutions and organization’ (Shirky, 2008:21) and, by making it so simple for the individual to contribute to a group effort, they have created the conditions for a “participatory culture” (Jenkins, 2006:3). Participatory culture, states Jenkins, ‘contrasts with older notions of passive media spectatorship. Rather than talking about media producers and consumers as occupying separate roles, we might now see them as participants who interact with each other according to a new set of rules that none of us fully understand’ (ibidem).

What interests me in this section is to unpick the types of participation that, as Jenkins notes, “none of us fully understand” yet (Jenkins, 2006:3), and to situate them in the context of interactive documentary production. Behind this approach lies the assumption that participation in creating software (Linux) is not the same as participation in creating an online encyclopaedia (Wikipedia) or a participative documentary (Mapping Main Street). While the strategies of collaboration (open source, crowd-sourcing, peer-reviewing, user-generated content etc.) are sometimes similar, the results are very different, because they can influence different moments of the creation of the digital artefact and they feed into media and forms which all have different affordances and constraints. Linux, Wikipedia and Mapping Main Street are all fed by UGC, but the way such content is used differs, because crowd participation in writing an encyclopaedia entry is not the same as peer participation in software development, and helping to debug an operating system is different from helping to edit a movie (as in RiP: A Remix Manifesto). Although Linux, Wikipedia and Mapping Main Street are all digital artefacts, they have different purposes, aesthetics and standards of success: software needs to run, an encyclopaedia needs to be trusted and a film needs a gripping narrative; they are comparable only to a certain extent.

Terms such as crowd-sourcing, open sourcing and user-generated content are not clearly differentiated when it comes to their application to interactive documentaries. Trying to make sense of those collaborative practices while analysing the participatory documentaries Life in a Day and Man with a Movie Camera: The Global Remake, collaborative documentary specialist Mandy Rose writes in her blog14: ‘How do we delineate crowdsourcing, collaboration and co-creativity in these works? How do we understand a shared process of meaning making? Is participation in these projects a good in itself? How do the process and the finished product interrelate? (…) These are complex questions, without ready answers’ (Rose, 2011, September 20th).

For me the confusion between crowd-sourcing, collaboration and co-creativity comes from the fact that they are often used as generic synonyms for participation. What those terms have in common is a bottom-up approach15 to cultural creation, but they differ in how such creation is reached, as they have different origins. As we will see in the next section, “peer-sourcing” and “open sourcing” come from the world of software hackers, while “user generated content” comes from the world of social networks, bloggers and Wikipedia contributors. None of those comes from the realm of video production, which means that, in order to understand how they can be applied to interactive documentary, we need to understand what they meant in their context of origin and how they have been adapted to the affordances and constraints of video production and documentary language.

From open source code to open source documentary

The term “open source” was coined in 1998, when technology publisher Tim O'Reilly organized the Freeware Summit to find a new name for what had previously been called “free software”. Open source is therefore the result of a strategic rebranding that promotes a way of developing software that had existed since the late 1960s16. As elegantly summarized by Tapscott and Williams in Wikinomics, open source code basically follows this motto: “nobody owns it, everybody uses it, and anybody can improve it” (2008:86).

In Rebel Code17 Glyn Moody explains how important it was for certain hackers to make open source official and therefore to have clear licences and modes of use18. Eric Raymond proposed to refer to Bruce Perens’ Debian Free Software Guidelines. Those guidelines19 made it clear that “open source doesn’t just mean access to the source code”. The distribution conditions of an open-source programme must comply with nine criteria which are there to guarantee the free distribution of derived versions20. Anybody can modify a source code, adapt it to a specific market need, and still copy and distribute it freely – even for commercial gain21. This is meant to give an incentive to programmers who wish to modify a programme: they can freely copy their version and distribute it without having to pay royalties to the original software owners, but their version needs to be available to all22. Hackers were convinced that the most effective way to achieve reliability in software was to open its source code to active peer review; ‘secrecy is the enemy of quality’ (Raymond, 2004:3).

The open source definition deals with the criteria for distributing such software, not with the way in which the programme has been created. The culture of free hacker collaboration that emerged through the creation of open source and free software is the result of a working methodology that programmers such as Torvalds23, Stallman24 or Murdock25 created: using the internet to post messages to the hacker community, a programmer would announce his project and people would voluntarily help and participate. More than crowd-sourcing, this is peer-sourcing within a highly specialised community: hackers.

This collaborative effort has proven to work very well in a relatively small, highly skilled community such as that of hackers (where there is a common passion, a sense of belonging, and where respect and reputation are important26). But could this model of peer-production work in areas other than software and in communities other than programmers?
When filmmakers started drawing the parallel “source code in software equals video rushes in documentaries”, they started adapting modes of production coming from different realms. Uploading rushes to the internet was interpreted as making them available to other filmmakers so that they could use them in other productions, or re-edit the original film.

Dancing to Architecture, by Leroy Black and Kristefan Minski, is to my knowledge the first documentary directly inspired by open source ideology. Shot in 2002, interestingly enough just one year after Creative Commons was founded, Dancing to Architecture is a film about the Australian This Is Not Art festival (TINA), held in Newcastle every year in October. During the festival people used any possible video format (Mini DV, Digital 8, Video 8, Hi8, DVC Pro and webcams) to cover the events of the festival. The 140 hours of interviews, presentations, workshops, events, exhibitions, performances and time-lapse recordings were then edited into an art film27, but they were also uploaded to an internet archive28 where anybody could use the footage freely for their own productions – or create a re-edit of the film. With a budget of AU$1000, and before the establishment of Web 2.0, the first open source documentary had been made.

But what is “open source” about it? The documentary itself was made like any other low budget documentary: a lot of participation from friends and volunteers to create a final piece which was edited like any other linear film. The authors of the movie retained their role as shapers of the film. What was perceived as new, back in 2002, was that the rushes were not considered the property of the people who shot them. The authors were not claiming sole use of their images: the interviews and all the video material were made available for others to use in remixes or in other productions29.

Fig 1. Dancing to Architecture online archive. Available at, retrieved 20.10.10.

The parallel that was made was: source code (in software) equals raw shots (in film). But this parallel works only up to a point. Source code in software is not just the equivalent of un-edited rushes. The code of a piece of software has an order, a grammar, that makes it “run”, and therefore “work”. Code does have an aesthetic, as one line of code can be more elegant than another in achieving the same goal, but ultimately the goal is to be read by the machine and to achieve a pre-set task. A documentary also has shots positioned in a certain order, a grammar, that makes it “work”, but it does not “run”; it expresses a point of view and it needs to “work” visually rather than practically30. Elegance and style, in film language, create meaning for the viewer; they are part of the form itself. Edits are not there to say to the projector “go to the next frame”; they are not lines of code to be executed, as they create moods, emotions and ultimately meaning for the viewer. Because shots are to be seen, their juxtaposition becomes the voice of the author, as the choice of such shots, in such an order, creates the message and mood of the documentary. But code is to be executed by the machine and not by the end user. The end user does not need to have access to the code of a piece of software to know if it works, while an audience needs the shots of a movie to make sense of the voice of the author. This makes all the difference: while one might be motivated to make something run better, it is quite difficult to feel the urge to alter someone else’s point of view by tweaking, or adding to, his or her shots and discourse. In Dancing to Architecture people could, in theory, remix the movie, add their own shots, and create a “better” version… but in reality why would they do so? This option remained a potentiality; what got used was the free pool of material made available on the web for new projects. Open video was understood as free rushes, not as collaboration within a single film.

Dancing to Architecture illustrates well the passage from open source code to open source narrative content. Although the film material is made free, the participation of the viewer does not influence or transform the “original” movie. Participation here follows an open source logic in the sense that new versions can be made, and material can be used, but the original film stays intact. This would be the equivalent of software that is made available for people to use freely, and to incorporate into other open source software, but with no possibility of changing it. After a few documentaries released with this logic31, the collaborative options of Web 2.0 inspired participation that involved people during the production process, and not only once the production was finished. This is the beginning of video participation understood as a way to influence ‘the processes of documentary production’ (Dovey and Rose, n.d:1) rather than to use its rushes.

Around 2004 filmmaker Brett Gaylor began working on a participatory project where people could not only share resources but collaborate on the film production itself. Coming from a new media background, Gaylor was one of Canada’s first videobloggers. He wanted to go beyond the idea of the free sharing of rushes, so he created the website Open Source Cinema32 where he encouraged people to participate in his feature documentary RiP: A Remix Manifesto. On his website Gaylor describes RiP as “an open source documentary about copyright and remix culture”33 – with particular interest in the charismatic remix DJ Girl Talk.

Fig 2. RiP: A Remix Manifesto home screen. Available at, retrieved 20.10.10.
It took six years for the film to take its finished shape34 and Gaylor claims that it is the result of hundreds of people who contributed to his website. But how did this collaboration really work? Gaylor is the first to admit that the collaboration logic changed throughout the years35; it evolved through trial and error. At the very beginning of the project Gaylor was uploading the rushes of the interviews he was doing and simply asking people to remix them. This did not work: no one knew about his project, and no one seemed interested in spending time remixing it. Crowd-sourcing the masses did not seem to work. Gaylor then tried to tap directly into the remixer community, searching for the most talented remixers via YouTube. Following Jeff Howe’s categorisation of crowd-sourcing36, Gaylor was now crowd-sourcing ‘the professionals’ (Howe, 2006:1), which is to say that he was peer-sourcing within a selected crowd of enthusiastic remixers. In a certain way he did what hackers do: identify the experts and ask them to participate in a project. He identified some talents and approached them via what he calls a “contest logic”: challenging them to re-edit something better than he had. This proved successful: a small community was now engaged in helping with a documentary about remix culture. They would communicate by e-mail and have a close relationship. Gaylor says that what he learned is that one needs to create different levels of participation, as the hardcore collaborators are very few. What seemed to work particularly well was to edit a segment, post it to the community, and then ask people to “fill the gaps” or to do a specific task. Gaylor here was clearly following Torvalds’ “benevolent dictator” strategy of collaboration, where all the decisions were made by him, but expert peers could collaborate on precise tasks37.
Gaylor’s attempt to introduce a participative logic into his documentary is limited by the final form of the documentary itself: a linear film, which needs to respect the rules of narrative coherence. Viewers can help in the process, but they cannot own the form.

When asked why he stayed so much in control of RiP, Gaylor answered “because it is my movie, I take responsibility for it”38. Although he believes in the power of collaboration, he does not think that leading by consensus works. Even in free software, he says, someone has to have a final say to avoid ‘forking’39. For him collaboration was a way to “keep the project honest, and to improve it”. People did make his film different from what he would have made alone, and they also provided a sort of guarantee that he would not deviate too much from their remix ethos and beliefs. But Gaylor is very clear that he had to keep editorial control, as for him “open sourcing software is not the same as open sourcing a cultural project”40. The two simply do not work in the same way. The assumption behind Gaylor’s answer is that a movie can only be collaborative up to a certain point, as the author always has to have the last word and express his or her point of view. Going further in this analysis, one would have to deduce that a piece of software can be the result of a vision but that it does not express a personal point of view.

A documentary that followed a crowd-sourcing logic of participation, on the Wikipedia model that will be explored next, would have to accept crowd-reviewing rather than single authorial editing. Such a documentary would probably lose its narrative coherence – normally linked to its author’s voice – and would therefore assume a rather fragmented aesthetic. The mosaic aesthetics of 6 Billion Others, Participate and Life in a Day’s interactive gallery – where a multitude of videos, audio or written fragments sent by users are only browsable as independent entities accessed through a common interface – can only be explained as an aesthetic of the multiple, where coherence is given not by authorial narrative but by the journey of the pro-sumers. As we will see in the next section, when a documentary fully embraces a mass crowd-sourcing logic the role of the author has to move from “narrator of a story” to “facilitator of other people’s stories”.

If in documentary praxis we are used to seeing the filmmaker as the author who proposes her “creative treatment of actuality” (Grierson) by filming and editing shots, effectively creating a text, in interactive participatory documentary the author can play with content, form and structure. She can design an interactive framework that will hold content over which she has no control, because it comes from others, or keep some level of control over the narratives she wants to develop. Authorship, therefore, is not only understood as the creation of a text; it gets extended to the creation of a structure of interaction, where the user has the power of navigation and interpretation and where a new entrant, the content participant, can create sub-narratives within a wider framework. The question then becomes: is crowd-sourcing leading towards the death of the author, or just altering our understanding of the term?
