T_Visionarium was created for the UNSW iCinema Centre’s Advanced Interaction and Visualisation Environment (AVIE), and it offers the means to capture and represent televisual information, allowing viewers to explore and actively edit a multitude of stories in three dimensions. For T_Visionarium, 28 hours of digital free-to-air Australian television was captured over a period of one week. This footage was segmented and converted into a large database containing over 20,000 video clips. Each clip was then tagged with descriptors, or metadata, defining its properties. The information encoded includes the gender of the actors, the dominant emotions they are expressing, the pace of the scene, and specific actions such as standing up, lying down and telephoning. Dismantling the video data in this way breaks down the original linear narrative into components that then become the building blocks for a new kind of interactive television.
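
The tagging described above can be pictured as one metadata record per clip. The following Python sketch is purely illustrative: the field names, vocabularies and values are assumptions based on the descriptors mentioned in the text, not the project’s actual schema.

```python
from dataclasses import dataclass, field

# Hypothetical metadata record for one of the 20,000+ video clips,
# modelled on the descriptors named above (gender, emotion, pace,
# actions). Field names and value vocabularies are illustrative
# assumptions, not the project's documented schema.
@dataclass
class ClipRecord:
    clip_id: int
    source_channel: str                                   # a free-to-air broadcaster
    duration_seconds: float
    genders: list[str] = field(default_factory=list)      # e.g. ["female", "male"]
    emotions: list[str] = field(default_factory=list)     # e.g. ["joy", "anger"]
    pace: str = "medium"                                  # e.g. "slow" | "medium" | "fast"
    actions: list[str] = field(default_factory=list)      # e.g. ["standing up", "telephoning"]

# One tagged clip as it might appear in the database.
example_clip = ClipRecord(
    clip_id=10412,
    source_channel="free-to-air",
    duration_seconds=6.5,
    genders=["female"],
    emotions=["surprise"],
    pace="fast",
    actions=["standing up"],
)
```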

T_Visionarium has been developed through three iterations. T_Visionarium I (2004-2005) undertook the theorisation of multi-stream narrative using robotic technologies in a Dome visualisation setting. T_Visionarium II (2006-2014) investigated the navigation and recomposition of a complex narrative database within AVIE’s 360-degree 3D setting. T_Visionarium III (2015-2017) added the ability for users to save their recompositions for later processing within the same 360-degree 3D setting.

Two hundred and fifty video clips are simultaneously displayed and distributed around AVIE’s huge circular screen. Using a special interface, the viewer can select, re-arrange and link these video clips at will, composing them into combinations based on relations of gesture and movement. By these means, the experience of viewing the television screen is not so much superseded as reformatted, magnified, proliferated and intensified. It is the experience of this new kind of spatial connectivity that gives rise to a revolutionary way of seeing and reconceptualising TV in its aesthetic, physical and semantic dimensions. To use the T_Visionarium apparatus is not to view a screen or even multiple screens, but to experience a space within which screen imagery is dynamically re-formulated and re-imagined.

T_Visionarium actively and continuously explicates television but, most importantly, it engages the domain in which it operates. Here, media is not an object of study but a material landscape in which we are component parts. T_Visionarium is a usable technology that locates us within a mediascape and makes us acutely aware of its operations, uncovering a televisual vocabulary of gesture. Stripped of its conventional narrative context, the aesthetic, behavioural and media qualities of television become strikingly apparent. And by affording us an active involvement, T_Visionarium hones both our awareness of and our dexterity with this media.

In essence, it is not so much a tool that delivers control of a mediascape as a mode of inhabiting our surroundings: a sphere of pure and endless mediality. In this and many other ways, T_Visionarium is a moment in the history of media: post-cinema, post-narrative, new media, but at the same time a major study in television and an embodiment of a new media aesthetics.

Jill Bennett

(Cf. Jill Bennett, T_Visionarium: A User’s Guide, ZKM/UNSW Press, Karlsruhe/Sydney, 2008)

  • T_Visionarium is an interactive narrative project investigating the exploration of a database of video streams that are recombined through viewer interaction, allowing new emergent narratives to be generated.

    The interactive architecture of digital technology provides a fresh opportunity for reformulating the role of narrative within cinema. Current experimentation in interactive narrative is handicapped by the under-theorisation of the role of time and critical attribution in virtual environments. We know, for instance, that digital architecture is multi-modal. We also know that multi-modal artifacts are shaped by software rather than semiotic codes. Software compresses information into virtually realisable and interpretatively thick units of meaning. Notwithstanding the use of digital animation in conventionally scripted cinema, which laboriously renders graphic images from scratch, as in Shrek, or in post-production enhancements such as Waking Life, the multi-modal information delivered to producers of digital cinema is already condensed into cultural tokens of text, sound and image at the point of contact. For this reason the metaphors of production in digital cinema borrow from images of montage, layering and re-assignment rather than from fabrication. The manipulation of culturally prefabricated information in digital media rehearses the long-standing artistic tradition of transcription. In this tradition the artist is presented with a body of informational resources or cultural goods which they reassemble in the process of creation. Thus the roles of the artist and viewer in a transcriptive model of cinematic production are editorially intertwined.

    The project presents an experimental framework which maps the transcription of televisual information as a form of recombinatory search. The re-enactment of televisual information has the potential for allowing a multiplicity of significant differentiations or fissions to occur within the original data. The great mass of televisual information is already received indirectly and sorted by the viewer in episodic memory. Television is encountered through techniques such as channel hopping, muting and multi-screens, through multiple associations in different contexts, or fragmented through time-delay and by report. Thus even though television broadcasts may begin as purposeful artifacts, their meaning for the viewer is not exhausted by critical recovery of their producers’ original intentions. Rather, their meaning is revitalised into temporal, directional and irreversible narrations, transcribing the functions such information is felt to cause and can be shown to perform. Transcriptive narrative dramatises the world instead of freezing it into schematic representations. It transforms the cinema into a kind of Platonic cave wall onto which viewers project, then respond to, the episodic shadows of their journey through cultural information. It is only insofar as digital technology makes multi-modal transportation of data within virtual time a practicality that the aesthetic potential of interactive narrative can be put to the test.

    Project objectives:

    • to explain transcriptive narrative as the viewer-generated manipulation of duration and movement in the reflexive reassignment of eventfulness to multi-modal cinematic information;
    • to test the function of transcriptive narrative within two demonstrator virtual environments, entitled T_Visionarium I and T_Visionarium II, through the experimental application of recombinatory algorithms in the digitised search of televisual information. The experimental design brings together three existing interactive models that, in concert, enable the eruption of data into multi-temporal narrative forms;
    • to evaluate the aesthetic significance of interactive narrative as a product of cultural transcription in digital cinema.
  • The advent of Internet search engines such as Google has demonstrated the informative power of computer-aided navigation of massive text and still-image databases. Anyone who has used these search engines experiences the remarkable outcomes of keyword searches that assemble clusters of information and open countless paths of exploration in cyberspace. A fictional rehearsal of this process is rendered in Steven Spielberg’s film Minority Report, where a number of scenes present the simulation of a semi-immersive information space that enables such search procedures within moving-image material.

    [Figure: example of statistical distribution]

    T_Visionarium makes this vision a reality, creating a wholly immersive information space where the viewer can interactively explore and link a vast database of video clips derived from multiple broadcast television sources. It expresses the artistic potential of such a system by embodying a tagging architecture that extends beyond the mere keyword hierarchies of similar topics found in conventional digital video archive systems. In particular, T_Visionarium II and T_Visionarium III enable the viewer to navigate within a cluster of similarities and so assemble a unique sequence of video events that share certain identities, while at the same time triggering the rearrangement of that cluster as soon as the viewer moves to a different clip. By shifting their attention to other clips at a greater distance, the viewer generates completely new arrangements of the material. The result is a fundamentally dynamic system of narrative interaction that is continuously fine-tuned as the viewer navigates the data space. In this process there is a continuous narrative reformulation. On the one hand the narrative is determined by the ordering of the tagging architecture. On the other hand, the narrative is completely open to reassembly into totally unexpected emergent sequences according to the individual pathways undertaken by the viewer. A schematic sketch of this rearrangement logic is given in code below.

    In effect T_Visionarium is an ultimate auto-creative authoring system, offering the viewer a real-time editing tool that operates in tandem with algorithmic processes to generate an infinitely varied, self-organising stream of narrative events. While the current embodiment of T_Visionarium uses a database of televisual materials, the concept and implementation are applicable to any kind of cinematic content. It prefigures a future where powerful home computing resources and large-screen displays will permit the recycling and repurposing of broadcast television, as well as any other source of recorded audio-visual material. In this way, T_Visionarium demonstrates a whole new genre and culture of media ecology.
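
One way to picture the cluster rearrangement described above is as a similarity search that re-ranks the whole database around whichever clip the viewer currently has in focus. The sketch below is a minimal illustration only, reusing the hypothetical ClipRecord from the earlier sketch; it counts shared tags as a stand-in similarity measure, since the actual metric used by the tagging architecture is not documented here.

```python
def tag_set(clip: ClipRecord) -> set[str]:
    """Flatten a clip's descriptors into a single comparable set of tags."""
    return set(clip.genders) | set(clip.emotions) | {clip.pace} | set(clip.actions)

def rearrange_cluster(focus: ClipRecord,
                      database: list[ClipRecord],
                      size: int = 250) -> list[ClipRecord]:
    """Re-rank the database around the viewer's current focal clip.

    Similarity is the number of shared tags, an assumed stand-in for
    whatever measure the real tagging architecture applies. The top
    `size` clips form the new on-screen cluster (250 clips are
    displayed simultaneously in AVIE).
    """
    focus_tags = tag_set(focus)
    candidates = [c for c in database if c.clip_id != focus.clip_id]
    # Most similar clips first: these become the new surrounding cluster.
    candidates.sort(key=lambda c: len(focus_tags & tag_set(c)), reverse=True)
    return candidates[:size]

# Each time the viewer shifts attention to a different clip, the
# displayed cluster is recomputed around that choice:
#     new_cluster = rearrange_cluster(selected_clip, database)
```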

ARC Investigators: Dennis Del Favero, Jeffrey Shaw, Neil Brown, Peter Weibel
Project Directors: Neil Brown, Dennis Del Favero, Jeffrey Shaw, Peter Weibel
Programmers: Matthew McGinity, Balint Seeber, Jared Berghold, Ardrian Hardjono, Gunawan Herman, Tim Kreger, Thi Thanh Nga Nguyen, Jack Yu, Alex Ong, Som Guan, Rob Lawther, Robin Chow
Project Funding: ARC DP0345547
2004-2017

  • T_Visionarium IV, ‘Art of Immersion’, ZKM Media Museum, Karlsruhe, Germany, 2017
  • T_Visionarium IV, ‘Aphasia – Düsseldorf & Cologne Open’, Galerie Brigitte Schenk, Cologne, 2017
  • T_Visionarium III, ‘Jeffrey Shaw & Hu Jieming Twofold Exhibition’, Chronus Art Center, Shanghai, China, 2014
  • T_Visionarium III, ‘STRP Festival’, Strijp R Klokgebouw, Eindhoven, Netherlands, 2010
  • T_Visionarium III, ‘Inaugural Exhibition’, ALiVE, CityU, Hong Kong, China, 2010
  • T_Visionarium III, ‘International Architecture Biennale’, Zuiderkerk, Amsterdam, Netherlands, 2009
  • T_Visionarium III, ‘Un Volcan Numérique’, Le Havre, France, 2009
  • T_Visionarium III, ‘Second Nature’, Aix-en-Provence, France, 2009
  • T_Visionarium III, ‘YOUniverse’, ZKM, Karlsruhe, Germany, 2009
  • T_Visionarium III, ‘eARTS Festival: eLANDSCAPES’, Shanghai Zendai Museum of Modern Art, Shanghai, China, 2008
  • T_Visionarium III, ‘Biennial of Seville’, Alhambra, Granada, Spain, 2008
  • T_Visionarium III, ‘Sydney Festival’, UNSW, Sydney, 2008
  • T_Visionarium II, ‘YOUser: Century of the Consumer’, ZKM, Karlsruhe, Germany, 2007
  • T_Visionarium II, ‘Panorama Festival’, ZKM, Karlsruhe, Germany, 2007
  • T_Visionarium I, ‘Artescienza: La Rivoluzione Algoritmica, Spazio Deformato’, Casa dell’Architettura, Rome, Italy, 2006
  • T_Visionarium I, ‘Avignon Festival’, Avignon, France, 2005
  • T_Visionarium I, ZKM, Karlsruhe, Germany, 2004
  • T_Visionarium I, ‘Cinémas du Futur, Lille 2004 Capitale Européenne de la Culture’, Centre Euralille, Lille, France, 2004

Directors: Neil Brown, Dennis Del Favero, Jeffrey Shaw, Peter Weibel

Lead Software Engineer: Matthew McGinity

Distributed Video Engine: Balint Seeber

Application Software: Jared Berghold, Ardrian Hardjono, Gunawan Herman, Tim Kreger, Thi Thanh Nga Nguyen, Multimedia and Video Communication Research Group (Dr Jack Yu), NICTA, Alex Ong, Som Guan, Rob Lawther, Robin Chow

Audio Software: Tim Kreger

This project was supported under the Australian Research Council’s Discovery funding scheme, produced by iCinema Centre and co-produced by ZKM, Karlsruhe.