Designing Non-verbal Interactions in Mixed Reality Narratives
Devices that integrate the immersiveness of VR into an AR platform have been called Mixed Reality (MR) tools.
VR - Virtual reality immerses the user in a digitally simulated, entirely mediated world.
AR - Augmented reality overlays mediated objects on the user's real-world context.
One way to mix these realities is to integrate real world physical elements in the digital environment in real time.
AR and VR have different capacities.
AR can transform the real world, whilst VR can transport users into transformative digital worlds.
This article outlines strong concepts for non-verbal MR interaction design.
Strong concepts are a theoretical framework for identifying generative, intermediate design knowledge, e.g. MR requires dynamic interaction between the user's position, the device's orientation and the immediate physical reality to display content, so the rendering of digital objects in MR is never static.
STRONG CONCEPT: Real Time Mapping refers to the ability to chart and track movement in space; it is this functionality which creates the setting of an MR narrative.
The real-time mapping system digitally assesses physical reality in order to output augmented or mixed reality in that space. It needs to understand the dimensions of a space, the user's location, and the location of any physical objects that might obstruct movement.
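As a minimal sketch of these mapping requirements, the following Python fragment models a tracked room with dimensions, a user position, and obstacles that block movement. The class and field names are illustrative assumptions, not part of any real MR SDK; a real system would derive all of this from SLAM and depth sensing.

```python
from dataclasses import dataclass, field

@dataclass
class RealTimeMap:
    """Illustrative MR spatial map: room bounds, tracked user
    position, and physical obstacles that block movement."""
    width: float                                   # room size in metres
    depth: float
    obstacles: list = field(default_factory=list)  # (x, z, radius) discs
    user_pos: tuple = (0.0, 0.0)

    def update_user(self, x: float, z: float) -> None:
        # In a real system this would come from device tracking / SLAM.
        self.user_pos = (x, z)

    def is_walkable(self, x: float, z: float) -> bool:
        # Inside the room bounds and clear of every known obstacle?
        if not (0 <= x <= self.width and 0 <= z <= self.depth):
            return False
        return all((x - ox) ** 2 + (z - oz) ** 2 > r ** 2
                   for ox, oz, r in self.obstacles)

room = RealTimeMap(width=5.0, depth=4.0, obstacles=[(2.0, 2.0, 0.5)])
room.update_user(1.0, 1.0)
print(room.is_walkable(2.1, 2.1))  # inside the obstacle: False
print(room.is_walkable(4.0, 3.0))  # clear floor space: True
```

A narrative engine could query such a map when deciding where to place digital characters so they appear on walkable, unobstructed floor.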
STRONG CONCEPT: The fidelity and detail of the input data can be manipulated by the MR designer to suit the needs of their experiences and interfaces.
Beyond mapping, systems can also implement object recognition to identify and categorize objects and assign interactive elements to them.
STRONG CONCEPT: Object recognition allows designers to amend the essence of objects and transform them into interfaces, e.g. enhanced information about physical buildings, made available via point-and-click effects, turns those buildings into digital buttons. A piece of paper with an AR tracking image that animates children's drawings becomes a performance stage...the drawing becomes a performer, the enhanced paper a stage.
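The idea of turning recognized objects into interfaces can be sketched as a registry that maps classification labels to interactive behaviours. All labels, handler names and return strings here are hypothetical, standing in for a real vision pipeline and content system:

```python
# Hypothetical handlers a designer assigns to recognized object classes.
def show_building_info(label):
    return f"Displaying history of {label}"

def animate_drawing(label):
    return f"Animating {label} on its paper stage"

# The recognition label becomes the key that makes an object an interface.
INTERFACE_HANDLERS = {
    "building": show_building_info,
    "childs_drawing": animate_drawing,
}

def on_object_recognized(label, tapped=False):
    """Called when the vision system classifies an object in view;
    a point-and-click tap activates its assigned behaviour."""
    handler = INTERFACE_HANDLERS.get(label)
    if handler and tapped:
        return handler(label)
    return None  # unrecognized object, or recognized but not activated

print(on_object_recognized("building", tapped=True))
```

The design point is the separation: recognition supplies the label, while the author decides what narrative behaviour that label dispatches.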
STRONG CONCEPT: MR interaction is not limited to digital objects. Movement and gestures can also trigger MR events. Location awareness and generative exploration are therefore tools to transform the experience of exploring space and augment the emotive aspects of that journey for the sake of the story, e.g. transforming billboards in Times Square into artworks.
Since people react contextually to their environments, dodging a speeding car or jumping over a pothole, the way in which a person moves through spaces can impact how they understand those spaces. Similarly, the user's kinaesthetic relationship to the MR environment gives them a sense of influence in that space, e.g. the AR version of Facade assessed users' movements: if they appeared to be erratic, Trip asked them if they wanted to leave. In ARQuake, a first-person-shooter experience, MR events were triggered by the user's proximity, e.g. doors would open.
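A proximity-triggered event of the kind described for ARQuake's doors can be sketched as a trigger volume that fires once when the tracked user enters its radius. The class and callback here are illustrative assumptions, not ARQuake's actual implementation:

```python
import math

class ProximityTrigger:
    """Fires a story event the first time the user enters a radius
    around a world position, in the spirit of ARQuake's doors."""
    def __init__(self, position, radius, on_enter):
        self.position = position      # (x, z) in metres
        self.radius = radius
        self.on_enter = on_enter      # callback supplied by the author
        self.fired = False

    def update(self, user_pos):
        # Called every frame with the latest tracked user position.
        if not self.fired and math.dist(user_pos, self.position) <= self.radius:
            self.fired = True
            return self.on_enter()
        return None

door = ProximityTrigger((3.0, 0.0), radius=1.0,
                        on_enter=lambda: "door opens")
print(door.update((0.0, 0.0)))  # too far away: None
print(door.update((2.5, 0.0)))  # within range: door opens
```

The same pattern generalizes to any movement-driven narrative beat: the author places the trigger, the mapping system supplies the user's position, and crossing the threshold advances the story.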
STRONG CONCEPT: Touch interfaces do not support real-world interactions; they are designed, digital interactions. Interactions that activate objects as interfaces turn those objects into dispatchers that communicate relative data to a narrative generation system.
GENERATING A STORY THROUGH THE DESIGN OF INTERACTIONS
The MR author's role is to develop the content of the story world. Authorship involves writing the rules by which the story is told as well as the text itself. This places focus upon access: how interacting with the content will drive the generation of the story. Non-verbal interaction design is focused on how the story is perceived and how gestures can generate its progress.
Whereas authors have control of the environment and interactions available in a VR world, that is not possible in AR, which requires a different approach that puts the user and their interactions at the centre of the story-making process, e.g. there is no way to guarantee that an MR retelling of the Jungle Book takes place in a jungle. Unable to develop the setting, MR authors can only focus on characters, objects and their relationships with the user.
Other potential narrative MR authoring mechanisms:
FREE HAND INTERACTION: Made possible by placing a 3D coordinate grid in the palm of the hand which is tracked via the MR device's camera. This allows for near space interactions in which a user is able to interact with nearby digital or physical objects. e.g. In the Lyra VR app users are able to grab, push, point, and manipulate digital buttons to create musical compositions.
FINGERTIP RAYS: Fingertip rays are for far-space interactions. Ray casting projects a 2D point of interaction, such as a touch on an iPhone screen, into 3D space to hit a digital object. In MR, rays are cast by detecting the finger, its orientation and its depth in the environment. In many AR first-person-shooter games the abstract touch on screen may appear as ammo being shot, in RPGs as a spell being cast, or in MR as simply pointing to and activating an item.
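The geometry behind a fingertip ray can be illustrated with a standard ray-sphere intersection test: a ray leaves the fingertip along the pointing direction and is checked against a target's spherical bounding volume. This is a textbook sketch under simplifying assumptions (a real MR engine would test full meshes and use the tracked hand pose for origin and direction):

```python
import math

def ray_hits_sphere(origin, direction, center, radius):
    """Does a ray from the fingertip (origin) along the pointing
    direction intersect a spherical target volume?"""
    # Normalise the pointing direction.
    mag = math.sqrt(sum(d * d for d in direction))
    d = [c / mag for c in direction]
    # Vector from fingertip to the target centre.
    oc = [c - o for o, c in zip(origin, center)]
    # Distance along the ray to the closest approach point.
    t = sum(a * b for a, b in zip(oc, d))
    if t < 0:
        return False                  # target is behind the finger
    closest = [o + t * di for o, di in zip(origin, d)]
    dist2 = sum((a - b) ** 2 for a, b in zip(closest, center))
    return dist2 <= radius ** 2

# Fingertip at the origin, pointing along +z at a target 2 m away.
print(ray_hits_sphere((0, 0, 0), (0, 0, 1), (0, 0.1, 2.0), 0.3))  # True
```

Whether the hit then reads as gunfire, a spell, or a simple selection is purely an authorial skin over this same underlying test.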
STATIC AND DYNAMIC GESTURES: Static - the angles between the fingers do not move over time, e.g. pointing, grasping; Dynamic - the angles between fingers change, e.g. waving, which can begin a conversation (i.e. utilizing body language). Many MR interactions involve both elements. Consider, for example, an AR urban planning application: users are able to dynamically pick up a building, statically hold it, and then dynamically place it. MR narratives assign story content to these different kinds of gestures.
Fisher, J.A., 2016, November. Strong Concepts for Designing Non-verbal Interactions in Mixed Reality Narratives. In International Conference on Interactive Digital Storytelling (pp. 298-308). Springer, Cham.
The USW Audience of the Future research team is compiling a summary collection of recent research in the field of immersive and enhanced-reality media.