Mobile Five Senses Augmented Reality System: Technology Acceptance Study

As part of their bid to integrate a multi-sensory device with museum visitors' mobiles, researchers created a theoretical model of new-technology acceptance. They tested it with an app they developed that detects museum objects using the phone camera, linked to a small Bluetooth extra-sensory (smells, temperature, vibration) device that museum visitors can carry around with them.
Extended-sense content was created for the museum's most important objects (masterpieces): text, audio, video, touch (cold, heat, vibration), smell, and taste. The app has four main features:
(i) detection and recognition of museum objects;
(ii) detection, recognition, and tracking of objects as the user moves through the museum, allowing the user to touch different areas of the objects displayed on the mobile screen to see information about that region, along with relevant smells;
(iii) detection and modelling of the museum's walls, with subsequent projection of content (e.g., images, movies, or text) related to the objects' epochs;
(iv) detection of people moving in the museum and, for instance, dressing them in clothes from the exhibition's epoch.
To save battery power, the camera is only activated in the app when the AR option is selected in the interface. The app also tracked users through the museum, recommending objects of interest nearby. To help explain and predict user acceptance of this device, the researchers applied the extended Unified Theory of Acceptance and Use of Technology (UTAUT2) model, which their findings suggest explains approximately 70% of the variance in behavioural intention and 50% of the variance in technology use. The model is composed of seven constructs:
(i) performance expectancy (expectations about what the app offers, or will be able to do);
(ii) effort expectancy (how easy or difficult it appears);
(iii) social influence (the extent to which use is perceived as a social activity);
(iv) facilitating conditions;
(v) hedonic motivation (the fun or pleasure derived from using the app);
(vi) price value;
(vii) habit.
FINDINGS
- In terms of behavioural intention, the opinions of friends, family, and influencers contributed to use of the app.
- Effort expectancy, denoted by users' perception of their skill and ability to use the app, was also influential.
- In terms of the way the app was used, facilitating conditions had the most impact, followed by behavioural intention: users believed they had the resources and the knowledge necessary to use the app.
- Cross-compatibility with other technologies also contributed to acceptance and usage, e.g. the ability to search for additional information about the museum's objects and about auxiliary services.
In this case, expectations about how the app should perform did not significantly influence behavioural intention. The greatest impact related to effort expectancy and facilitating conditions.
Rodrigues, J.M., Ramos, C.M., Pereira, J.A., Sardo, J.D. and Cardoso, P.J., 2019. Mobile Five Senses Augmented Reality System: Technology Acceptance Study. IEEE Access, 7, pp.163022-163033.

Designing interactive narratives for mobile augmented reality

This paper outlines audience research on early prototypes for mobile AR narrative games, in which users were invited to co-create a location-based narrative using AR.
Noting that real-world objects can stimulate users' imagination, the designers created a system to encourage users to create their own stories in relation to real sites. Design guidelines included:
1) menu minimisation, enabling users to jump directly into stories
2) carefully constrained user freedom
3) choice of interaction metaphors to support cross-space flow
4) encouraging joy by emphasising playful discovery
DISCOVERY
- In this mobile AR game design, users could explore and uncover hidden parts of the story at random through their phone, which is used like a window onto the physical world, enabling the discovery of narrativised AR objects.
- Only portions of the AR world can be viewed at any one time, forcing users to assemble the overall puzzle in their heads.
- Users were also given the option to create their own stories by writing tags for these location-linked AR objects.
- These tags could in turn be shared and modified by other users.
Following these principles the designers created three prototypes:
1) In the first prototype, users could discover AR objects through a scrubbing and dusting action that mimicked cleaning, accompanied by physical mobile vibrations whenever the phone was shaken to 'clean' the AR dirt.
2) The second prototype involved users catching AR fish in prototype augmented ponds. Different ponds and fish were linked to different words and sentences, which users strung together through the act of fishing. The image on the mobile viewfinder was simultaneously projected on a nearby large display.
3) In the third, users played with augmented comic-strip graphics on a wall, adding their own captions and tags.
Results:
- Users regarded the third prototype as the most interactive in terms of narrative creation.
- The narrative interactivity of the second prototype was less well understood.
- Prototype 1 was regarded as the most playful. Even middle-aged users enjoyed the scrubbing, despite weak flow between the physical and virtual realms: participants focused only on the dirt-scrubbing activity once the virtual body appeared on the mobile screen.
Nam, Y., 2015. Designing interactive narratives for mobile augmented reality. Cluster Computing, 18(1), pp.309-320.

User-Centred Design of Smartphone Augmented Reality in Urban Tourism Context

Other than the consistent finding that users find it easier to read annotations in small visual bubbles, which help differentiate them from the real-world background in the viewfinder, little was known about the mobile AR tourist experience prior to this extensive 2015 study. From these tests the researcher devised a
USER-CENTERED DESIGN FRAMEWORK FOR SMARTPHONE AUGMENTED REALITY
In essence, the framework encourages designers to (1) match the perceived physical characteristics (e.g. visibility, visual salience, and legibility) of target objects to ensure usable annotations, and (2) predict the information needs of users to enhance the utility of delivered content.
Interaction triggers with the AR browser and expectations for content
- Users interpret visual cues about the purpose and importance of a site or object, and will not interact with the browser if there are no cues to suggest a site's interest.
- If a site is deemed important, they will expect more content about it.
- They also need guidance to find sites that are not yet visible in the viewfinder.
Association of virtual annotations and physical targets
- There has to be at least one (direct or indirect) visual match between the perceived characteristics of the physical target and the AR annotation for users to associate them effectively.
- If all annotations have the same visual attributes (layout, symbols, names), it is harder for users to know which annotation(s) refer to the target object.
Information needs and queries
- AR browsers should prioritise delivery of content about visible targets, because this is what tourists are most likely to react to.
- Attention is limited in busy environments and directed towards visual points of interest, so the information needs of the user will depend on the visibility of physical targets. Partial visibility might also lead to a different set of assumptions about the target physical object, and therefore affect information needs.
- If the tourist has learned about the historical and cultural significance of a landmark beforehand, their questions will differ (e.g. Why is this important?) from those of tourists who do not have this information (e.g. What is this?).
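The association principle above — that an annotation needs at least one direct visual match with its physical target, and that identical-looking annotations become indistinguishable — can be sketched as a simple matching check. This is an illustrative reconstruction, not code from the thesis; the attribute names (`colour`, `contour`, `name`) are assumptions chosen for the example.

```python
# Illustrative sketch: an AR annotation is considered linkable to a physical
# target only if at least one of its visual attributes directly matches a
# perceived attribute of the target. Attribute names are assumed, not from
# the source study.

def matches_target(annotation: dict, target: dict) -> bool:
    """True if the annotation shares at least one visual attribute
    (e.g. colour, contour/pictogram shape, visible name) with the target."""
    shared_keys = annotation.keys() & target.keys()
    return any(annotation[k] == target[k] for k in shared_keys)

def distinguishable(annotations: list) -> bool:
    """If all annotations carry identical visual attributes, users struggle
    to tell which refers to which target -- this flags that failure case."""
    unique = {tuple(sorted(a.items())) for a in annotations}
    return len(unique) == len(annotations)

landmark = {"colour": "red", "contour": "tower", "name": "Clock Tower"}
good_annotation = {"colour": "red", "name": "History of the Clock Tower"}
generic_annotation = {"colour": "blue", "name": "Info"}

print(matches_target(good_annotation, landmark))     # shares "colour" with the target
print(matches_target(generic_annotation, landmark))  # no shared visual attribute
```

A real browser would compare rendered graphic variables rather than string labels, but the design rule is the same: guarantee one shared, perceivable attribute per annotation-target pair.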
Embodied interaction and spatial permanence of annotations
Users expect annotations above or near small objects, and in the centre of larger spaces. They also expect annotations to move along the length of a street, rather than staying in one place.
User Requirements for AR Annotations
- The perceived function of target objects (e.g. restaurant, historical building) influences the information needs and expectations of tourists. For instance, reviews and ratings are considered necessary, useful and relevant only for specific types of physical objects (e.g. restaurants, cafes, food venues).
- Providing information about non-visible targets (e.g. a short walk away, or inside buildings) can also enhance situation awareness, but it is helpful if the annotations for visible and non-visible targets are different and that difference is readily apparent.
- It is helpful if virtual annotations directly match at least one of the perceived visual characteristics of the target object on the smartphone screen.
- Changes in the visibility (perspective, distance) of target objects should be reflected in the representation of the virtual AR annotation to ensure efficient and effective association between them.
- It is especially important to provide content for points of interest that tourists might have learned about from other information sources and consider important.
- Because user expectations differ for moving and static target information, different rules need to be set for discrete entities (e.g. buildings, with explanatory annotations directly above or next to them), continuous linear entities (e.g. streets and rivers, which may need moving, continuous information) and spatial entities (e.g. squares, where information can sit centre-screen).
- There is generally no need to provide information that can already be visually perceived or extracted from the physical environment (e.g. the name of a coffee shop).
- Useful information aids decision-making and micro time/journey management, such as special (unique), interesting and important location information from a tourist's point of view.
- Tourists ultimately need to acquire information about paths (route knowledge) and the relations among POIs (survey knowledge), so it is helpful to provide different location-based interfaces, such as 2D and 3D maps, lists or more traditional tour-guide interfaces.
- When screen space is limited, users should be able to infer that they can find further answers by interacting with the smartphone display and sequentially accessing more information about the physical target.
Design Parameters and Taxonomy for AR Browsers
There are three main high-level design parameters that ultimately affect the usability and perceived utility of AR browsers: (1) abstraction level of the base layer (y), (2) abstraction level of the attribute layer (x), and (3) amount of information (z). These three design parameters are inter-connected. A lower level of abstraction means faster and more effective second referential mapping (and overall association of AR annotations and physical targets).
Design Guidelines for Smartphone AR Browsers
Apart from whole physical structures, various elements of the environment, such as signs, windows, and other architectural features, can attract the attention of the tourist and trigger information needs.
Satisfying the information needs of tourists
- The primary purpose of the attribute (AR) layer is to capture information that is not present in the physical environment and could not be obtained without the smartphone.
- Users should have choices over displayed information, as well as easy routes to further information if they wish, but screen clutter should be minimised. Users rarely read longer descriptions for individual annotations when they had to consult the AR display with an extended arm.
- A tappable button that switches the virtual attribute layer for non-visible targets (contextual information) on and off might also be helpful.
- At the same time, annotations should be visually salient, attract the attention of the user and increase the desire to learn about the environment. All content should be balanced and merge well with the physical representation of the surroundings (base layer) and the target annotation.
- Distance-based filtering in AR browsers is not only an under-utilized function, but leads to difficulties and confusion when users want to reduce the number of annotations on display. Providing a function that filters out information based on the visibility status of physical entities could prevent such difficulties, save time and be less cognitively demanding for tourists.
Ensuring effective association
- Abstract symbols can take time and effort to process. Names and keywords can be used if they are physically present and visible from the current location of the user. Pictograms (landmarks) can be used when the target object is a building with a distinctive shape and contour.
- Visible graphic variables (e.g. colour, contour) are more suitable as matching parameters. As long as the link remains clear, rendering the real world in a non-photorealistic way, using photorealistic virtual models, or a colour-coding technique (e.g. matching the colour of a semi-transparent overlay with the colour of the annotation) can also be effective linking strategies.
- Directional pointers can also help to indicate links between annotations and real-world objects.
Influence and control over perception of urban environments
- Designers can use push-based notifications to guide attention.
- Different visualisation techniques that alter the detail of visual panoramas can also help to flag points of interest.
- Keywords such as “interesting” or “popular” trigger interest and influence perception of specific urban entities.
Yovcheva, Z., 2015. User-centred design of smartphone augmented reality in urban tourism context (Doctoral dissertation, Bournemouth University).

nARratives of Augmented Worlds: A survey of early augmented reality fiction

“If interactive narrative is ever going to approach the emotional power of movies and drama, it will be as a three-dimensional world that opens itself to the body of the spectator but remains the top-down design of a largely fixed narrative script”
After a review of narrative theory in AR environments - considering the difference between a story (a sequence of events) and a narrative (the way those events are presented); the likely blurring of fiction (the story) and reality (the mobile environment) in AR; the broad treatment of the term 'text' in this article to include all forms of communication, including hardware and buildings; and their definition of a medium, which extends beyond the hardware to include "a set of conventions, practices and design approaches that authors make use of to create a familiar and meaningful experience for the user" - the authors survey features of early AR fiction experiences, distinguishing between:
1) Situated (local, quick) augmented experiences
2) Location-based narratives (using a few, sparsely located portals in a wider area)
3) World-level AR experiences sited across an entire neighbourhood, city, or the globe, which tend to run for longer periods.
1) Situated augmented experiences
- AR/Façade: A portable version of a previous breakthrough interpersonal audio-visual dialogue interaction with a warring couple, AR/Façade allowed users to inhabit the same physical/virtual space as the drama's main characters, Tripp and Grace, while wearing an AR headset and a portable computer. To maintain the illusion, the virtual characters could not respond with messages like 'invalid input'.
- Three Angry Men (TAM): An interactive experience that allows users to explore the scene from different physical points of view (which trigger character behaviour changes, rather than plot variations).
- "inbox": An AR installation that allowed users to enter a shipping container and trigger short stories about the shipping industry by engaging AR markers in the space.
2) Location-based narratives
- M-Views: Users were encouraged to walk around the MIT campus and encounter a distributed, modular, variable-order cinematic narrative embedded about the campus.
- Murder on Beacon Hill: A murder-mystery tour of downtown Boston.
- GEIST: A similar approach, used to tell the history of 17th-century Heidelberg.
- Hopstory: Added the option for users to act in their own timeline and move to different locations throughout the building, adding a layer of time and evolution.
- The Westwood Experience: The creators used real live actors and physical setups to increase immersion, alongside computer vision methods for landmark locations. At certain points the actors broke out of character to explain technical aspects of this Nokia research project.
- The Oakland Experience: A mostly linear, audio-only tour of a cemetery, with branching mini-stories around single graves.
- [8]: Combined a positional tracking system with a directional microphone to create an unfolding narrative in the changing landscape surrounding a motor-car drive-through (whilst there were some branches, the drive mostly had to be linear, without room for variation).
3) World-level augmented narratives
- Alternate Reality Games, e.g. 'Conspiracy for Good': Involved both online and offline presence with live actors. The master narrative was fixed, but players had the option to change the advent or order of the next story 'beat', which supported the impression that they were changing the story.
These examples often emphasised exploration of real-world spaces in order to support both a sense of interactivity and narrative progression. To help guide users through this exploration, AR markers were used, as well as non-marker signposts, e.g. buildings and objects that might be more compatible with the fictive immersion.
Shilkrot, R., Montfort, N. and Maes, P., 2014, September. nARratives of augmented worlds. In 2014 IEEE International Symposium on Mixed and Augmented Reality-Media, Art, Social Science, Humanities and Design (ISMAR-MASH'D) (pp. 35-42). IEEE.
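A mechanism common to the location-based narratives surveyed (M-Views, GEIST, The Oakland Experience) is triggering a story fragment when the user comes within range of a geolocated portal. The sketch below illustrates that pattern only; the coordinates, story text, and 25 m trigger radius are invented for the example and do not come from any of the surveyed systems.

```python
import math

# Minimal sketch of a location-triggered narrative: story "beats" are anchored
# to coordinates and fire when the user walks within range of a portal.
# All names, coordinates, and the 25 m radius are illustrative assumptions.

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two lat/lon points."""
    r = 6_371_000  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

BEATS = [  # (lat, lon, story fragment) -- hypothetical portal locations
    (42.3601, -71.0589, "Chapter 1: the letter is found"),
    (42.3611, -71.0575, "Chapter 2: a witness speaks"),
]

def beats_in_range(lat, lon, radius_m=25.0):
    """Return the story fragments whose portals lie within radius_m."""
    return [text for (blat, blon, text) in BEATS
            if haversine_m(lat, lon, blat, blon) <= radius_m]

# Standing at the first portal triggers only that portal's fragment.
print(beats_in_range(42.3601, -71.0589))
```

A fixed master narrative with variable beat order, as in 'Conspiracy for Good', would add sequencing state on top of this trigger check rather than changing the check itself.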
Author: The USW Audience of the Future research team is compiling a summary collection of recent research in the field of immersive and enhanced reality media.
October 2020