Across its 100+ year history, cinema has developed a lexicon of edits: meaning-laden codes built around the visual representation of temporal progression from one image to the next. As screen-based media migrates onto handheld devices that add touch to the audio-visual interface connecting text and reader, a new language of image transition that incorporates physical gesture is emerging. Unlike visual edits, gestural interfaces have not yet settled into a shared set of conventions, and experimental texts proliferate on the iPhone and similar devices. Swipe, tilt, shake, and tap are becoming counterparts to cut, wipe, mix, and fade, but the conventions that shape their meanings for audiences are still up for grabs.
Ruben and Lullaby (2009) is an interactive fiction for the iPhone. Drawing on Bolter and Grusin’s (1999) analysis of new media’s incorporation of pre-existing forms, Ruben and Lullaby can be seen as remediating the shot sequencing of a conventional cinematic dialogue scene between two characters. It adds a gestural twist to a readily recognisable scenario by empowering the viewer to determine when edits occur: when the viewer tilts the screen to one side or the other, the program cuts to another shot and triggers an audio cue on the soundtrack. Building on the conventions of audio-visual editing in cinema, these cuts produce an affect in the viewer that may suggest a range of readings, primarily related to the tempo of the cuts. For example, rapid tilting from side to side produces an equally rapid series of cuts, creating a sense of narrative conflict for the user-viewer. Adding a further layer of complexity, the user can also produce affects in the characters on screen, stroking the screen to soothe them or shaking it to agitate them. The scenario is played through algorithmically, combining the user’s input with the two characters’ affective relationship to each other to produce an outcome that fits within a conventional narrative structure.
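The interaction model described above can be sketched schematically. This is purely an illustrative toy, not the actual implementation of Ruben and Lullaby: the class names, mood values, and outcome rule are all hypothetical assumptions introduced here to make the gesture-to-edit logic concrete.

```python
# Hypothetical sketch of the gesture-driven editing logic described above.
# All names, states, and thresholds are illustrative assumptions, not the
# actual code of Ruben and Lullaby.

class Character:
    def __init__(self, name):
        self.name = name
        self.mood = 0  # negative = agitated, positive = soothed

    def apply_gesture(self, gesture):
        # Stroking the screen soothes a character; shaking agitates them.
        if gesture == "stroke":
            self.mood += 1
        elif gesture == "shake":
            self.mood -= 1


class Scene:
    """Cuts between two characters' shots in response to tilt gestures."""

    def __init__(self, a, b):
        self.characters = [a, b]
        self.current_shot = 0  # index of the character currently on screen

    def tilt(self):
        # Tilting cuts to the other character's shot and cues the soundtrack.
        self.current_shot = 1 - self.current_shot
        return f"CUT TO: {self.characters[self.current_shot].name} (audio cue)"

    def outcome(self):
        # The ending combines both characters' accumulated affective states
        # into one of a small set of conventional narrative resolutions.
        total = sum(c.mood for c in self.characters)
        return "reconciliation" if total > 0 else "break-up"


ruben, lullaby = Character("Ruben"), Character("Lullaby")
scene = Scene(ruben, lullaby)
print(scene.tilt())          # CUT TO: Lullaby (audio cue)
scene.characters[0].apply_gesture("stroke")
scene.characters[1].apply_gesture("stroke")
print(scene.outcome())       # reconciliation
```

The point of the sketch is the coupling it makes visible: tilt gestures drive the editing rhythm (discourse), while stroke and shake gestures drive character affect (story), and the two streams are only reconciled algorithmically at the level of the outcome.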
The physicality (one might even say the violence) of this interaction with the screen can be read as an attempt to break with the traditional fixity of the viewer’s body in relation to the screen, as noted by Lev Manovich in The Language of New Media (2001). It is important to note, however, that although the viewer can affect the narrative discourse with their body, a degree of fixity between the eye and the image is still required in order to view the screen.
A century after the montage experiments of Lev Kuleshov, algorithmic media is developing a new language of embodied interaction with the text. Just as montage forced a reconceptualisation of the relationship between the spatiality and temporality of images, gestural human-computer interfaces add significant new spatial dimensions to narrative works. For critical media theorists, the next step is to examine how this embodiment of the user-reader opens political possibilities in art, just as montage offered opportunities for the exploration of a liberatory discourse.