Project Details


Scenario undertakes a narrative experiment within a framework that integrates independently proven experimental systems to test the hypothesis that co-evolutionary narrative can aesthetically demonstrate levels of autonomy in humanoid characters.

The materials are integrated from established interactive technologies. First among these is iCinema’s world-first 360-degree 3D cinematic theatre, AVIE3, which enables the deployment of an experimental interactive framework incorporating digital sensing, interpretive and responsive systems. The narrative experiment is designed to dramatise distinct behavioural processes, thus probing humanoid autonomy and the cognitive gap between humanoid and human participant. The design provides humanoids with minimal perceptive, reasoning and expressive capabilities that allow them to track, deliberate on and react to human participants, with the autonomy and deliberation characteristic of co-evolutionary narrative.

SCENARIO installation view

The framework is structured so as to respect autonomous humanoid intentionality, as opposed to the simulated intentionality of conventional digital games. While narrative reasoning in human-centred interactivity focuses exclusively on human judgments, co-evolutionary narrative allows for deliberated action by humanoids. This involves providing these characters with a number of capacities beyond their rudimentary pre-scripted behaviour: first, the ability to sense the behaviour of participants; second, the facility to represent this behaviour symbolically; and third, the capacity to deliberate on their own behaviour and respond intelligibly.

The humanoids define their autonomy experimentally through their ability to deliberate within a performance context inspired by the experimental film and television work of Samuel Beckett. In this “Beckettian” performance, individual and group autonomy is determined by physical interactive behaviour, whereby characters define themselves and each other through their reciprocal actions in space. The reciprocal exchange of behaviour in this context is sufficiently elastic to allow for the expression of creative autonomy. As Scenario focuses on human and humanoid clustering, its experiments examine the relation between groups of humanoids and groups of humans.

SCENARIO screenshot

Beckett’s research is extended because it provides an aesthetic definition of group autonomy as other-intentional, that is, predicated on shared actions between groups of human participants. In Beckett’s Quad, for example, characters mutually define each other by means of their respective territorial manoeuvres as they move backwards and forwards across the boundaries of a quadrant. Quad is drawn on as a way of aesthetically conceptualising the relationship between spatialisation and group consciousness. For example, in one scene, participants are confronted with several humanoids, who cluster themselves into groups in order to block the ability of the human participants to effectively negotiate their way through the space. The better the humanoids can work as a group, the more effective is their blocking activity.

This type of interaction generates a cascading series of gestural and clustering behaviours, testing and evaluating the network of meaningful decisions by humanoids and human participants as they attempt to make sense of each other’s behaviour.

Technical features

The digital world of Scenario has a number of technical features:

AI System

The AI system is based on a variant of a symbolic logic planner drawn from the cognitive robotics language Golog, developed at the University of Toronto, that is capable of dealing with sensors and external actions. Animations that a humanoid character can perform are treated as actions that need to be modelled and controlled (e.g., walking to a location, pushing a character). Each action is modelled in terms of the conditions under which it can be performed (e.g., a character can push another only if it is located next to it) and of how it affects the environment when it is performed. Using this model, the AI system plans and coordinates the actions (i.e., animations) of the humanoid characters by reasoning about the most appropriate course of action.
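The precondition/effect style of action modelling described above can be sketched in miniature. The following is a minimal, hypothetical illustration in Python, not the project's Golog implementation: each action carries a precondition test and an effect function, and a depth-limited forward search finds an applicable sequence of actions (the `walk` actions and location names are invented for the example).

```python
class Action:
    """An action modelled by its precondition and its effect on the world."""
    def __init__(self, name, precondition, effect):
        self.name = name
        self.precondition = precondition  # state -> bool: can it be performed?
        self.effect = effect              # state -> new state after performing it

def plan(state, actions, goal, depth=4):
    """Depth-limited forward search: find a sequence of applicable actions
    transforming `state` into one that satisfies `goal`."""
    if goal(state):
        return []
    if depth == 0:
        return None
    for a in actions:
        if a.precondition(state):
            rest = plan(a.effect(state), actions, goal, depth - 1)
            if rest is not None:
                return [a.name] + rest
    return None

# World state as a set of facts; a "walk" action is applicable only when the
# character is at the start location, and its effect moves the character.
def make_walk(frm, to):
    return Action(
        f"walk({frm},{to})",
        lambda s, frm=frm: ("at", "humanoid", frm) in s,
        lambda s, frm=frm, to=to: (s - {("at", "humanoid", frm)})
                                  | {("at", "humanoid", to)},
    )

actions = [make_walk(a, b) for a, b in [("A", "B"), ("B", "C")]]
start = frozenset({("at", "humanoid", "A")})
goal = lambda s: ("at", "humanoid", "C") in s
print(plan(start, actions, goal))  # → ['walk(A,B)', 'walk(B,C)']
```

The planner reasons only over the declared preconditions and effects, which is the essential point of the modelling: the same search works for any action (pushing, turning, gesturing) once it is described in those terms.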

AI Interface

A networked, multi-threaded interface connects the digital world to an external artificial intelligence system, which can query the world state (e.g., character positions, occurring events) and can control humanoid characters and trigger events in the humanoid world. Currently a cognitive robotics language based on Golog is used, but the AI Interface has been developed in a modular fashion that allows any other language to be plugged in instead.
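The modular, pluggable design can be sketched as follows. This is a hypothetical illustration, not the installation's actual protocol: world state and commands travel as JSON messages, and any reasoning backend implementing a common interface can be swapped in (the class names, message fields and the trivial "walk toward the nearest human" policy are all invented for the example).

```python
import json
from abc import ABC, abstractmethod

class AIBackend(ABC):
    """Pluggable reasoning backend: any language or system wrapped to
    implement `decide` can drive the humanoid characters."""
    @abstractmethod
    def decide(self, world_state: dict) -> list:
        """Return a list of commands given the current world state."""

class NearestHumanBackend(AIBackend):
    """Toy policy: each humanoid walks toward the nearest human (1D positions)."""
    def decide(self, world_state):
        commands = []
        humans = world_state["humans"]
        for h in world_state["humanoids"]:
            target = min(humans, key=lambda p: abs(p - h["pos"]))
            commands.append({"actor": h["id"], "action": "walk_to",
                             "target": target})
        return commands

def handle_message(backend, raw):
    """Decode a world-state query, ask the backend, encode its commands.
    The JSON wire format here is an assumption for illustration."""
    return json.dumps(backend.decide(json.loads(raw)))
```

Because the interface only ever sees `decide`, replacing the Golog-based reasoner with another system amounts to writing one new backend class, which is the point of the modular design described above.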

SCENARIO screenshot

Real-Time Tracking and Localisation System

The tracking system uses techniques from computer vision and artificial intelligence to identify and locate persons in space and time. A total of 16 cameras identify individuals as they enter the AVIE3 environment and maintain their identities as they move around. Spatial coherence is exploited: overhead cameras provide a clear view but no height information, while oblique cameras provide better location information. A distributed architecture over dedicated network hardware, together with software optimisations, delivers real-time results. Between camera updates, prediction techniques maintain accurate person positions at high update rates, and the tracking system remains robust when multiple people are present. A real-time voxel reconstruction of every individual within the environment leverages the tracking system to considerably speed up construction of the 3D model, while a head and fingertip recognition and tracking system allows users to interact with the immersive environment through pointing gestures.
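The idea of predicting positions between camera updates can be sketched with a simple constant-velocity model. The following is a minimal alpha-beta-filter-style illustration, an assumption for explanatory purposes rather than the installation's actual tracker; the class name and gain values are invented.

```python
class Track:
    """One tracked person: position extrapolated between camera updates
    with a constant-velocity model, corrected by each new measurement."""
    def __init__(self, pos):
        self.pos = pos          # (x, y) floor position in metres
        self.vel = (0.0, 0.0)   # metres per second

    def predict(self, dt):
        """Extrapolate position forward by dt seconds (between camera frames)."""
        self.pos = (self.pos[0] + self.vel[0] * dt,
                    self.pos[1] + self.vel[1] * dt)
        return self.pos

    def update(self, meas, dt, alpha=0.85, beta=0.005):
        """Blend a new camera measurement into the predicted state.
        alpha corrects position, beta corrects velocity (illustrative gains)."""
        rx = meas[0] - self.pos[0]
        ry = meas[1] - self.pos[1]
        self.pos = (self.pos[0] + alpha * rx, self.pos[1] + alpha * ry)
        self.vel = (self.vel[0] + (beta / dt) * rx,
                    self.vel[1] + (beta / dt) * ry)
```

Calling `predict` many times per camera frame is cheap, which is how a tracker can report positions at a much higher rate than the cameras deliver, with `update` pulling the estimate back toward each actual measurement.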

Animation Interface

A custom software toolset, iC AI, functions as a virtual laboratory for constructing humanoid characters to be used in narrative scenarios. It enables characters appearing within the AVIE space to exhibit a high level of visual quality, with realistic human-like animations. This includes the ability to instruct characters at a higher programmatic level (walking to a point, looking at objects, turning, employing inverse kinematics) rather than issuing individual joint commands, and the ability to schedule these behaviours to produce believable, fluid characters.
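The contrast between higher-level instruction and individual joint commands can be sketched as a per-character behaviour queue. This is a hypothetical illustration, not iC AI's actual API: method names such as `walk_to` and `look_at` are invented, and each `tick` stands in for one animation step.

```python
from collections import deque

class Character:
    """A humanoid driven by queued high-level behaviours rather than
    individual joint commands."""
    def __init__(self, name):
        self.name = name
        self.queue = deque()  # scheduled behaviours, played back in order
        self.log = []         # record of executed behaviours

    def walk_to(self, point):
        self.queue.append(("walk_to", point))
        return self  # returning self lets a script read like a schedule

    def look_at(self, target):
        self.queue.append(("look_at", target))
        return self

    def tick(self):
        """Advance the schedule by one behaviour per animation step."""
        if self.queue:
            action, arg = self.queue.popleft()
            self.log.append(f"{self.name}:{action}({arg})")

c = Character("humanoid1")
c.walk_to((2, 3)).look_at("participant")  # schedule two behaviours
c.tick()
c.tick()
```

Underneath each high-level behaviour, an animation system would resolve the actual joint motion (e.g., via inverse kinematics); the scheduling layer is what lets a narrative script remain readable.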

Mixed Reality System

A custom 3D behaviour toolset, AVIE-MR, allows the creation of ‘scenarios’ that exhibit a cycle of cause and effect between the real world and the digital world. Its principal feature is that it allows the cognitive robotics language to implement realistic behaviour in the humanoid characters with a minimum of programming effort, ensuring enhanced levels of human interactive and immersive experience.