CO-CREATING THE AUDIENCE OF THE FUTURE

9/22/2020

Course: Intro to AR


Google Coursera course: Introduction to Augmented Reality and ARCore

We don't normally summarise course content. Yet this particular course offers a clear and accessible introduction to augmented reality technologies, so it seems helpful to flag it here. The main points covered in the course are listed below:

The basics of augmented reality
  • Humankind’s first foray into immersive reality through a head-mounted display was the “Sword of Damocles,” created by Ivan Sutherland in 1968.
  • HMD is the acronym for “head-mounted display.”
  • The term “Augmented Reality” was coined by two Boeing researchers in 1992.
  • A standalone headset is a VR or AR headset that does not require external processors, memory, or power.
  • Through the combination of their hardware and software, many smartphones can display AR experiences, though these are less immersive than those delivered by HMDs.
  • Many of the components in smartphones—gyroscopes, cameras, accelerometers, miniaturized high-resolution displays—are also necessary for AR and VR headsets.
  • The high demand for smartphones has driven the mass production of these components, resulting in greater hardware innovations and decreases in costs.
  • Project Tango was an early AR experiment from Google that combined custom software with hardware innovations, leading to a phone with depth-sensing cameras and powerful processors to enable high-fidelity AR.
  • An evolution of Project Tango, ARCore is Google’s platform for building augmented reality experiences.

AR functionality
  • Immersion is the sense that digital objects belong in the real world; in order to seem real, an AR object has to act like its equivalent in the real world.
  • Breaking immersion means that the sense of realism has been broken; in AR this usually happens when an object behaves in a way that does not match our expectations.
  • Placing is when a digital object is fixed, or anchored, to a certain point in the real world.
  • Scaling is when a placed AR object changes apparent size and/or dimension relative to the AR device's position: as a user moves toward or away from an AR object, it appears to grow or shrink with the phone's distance. AR objects further away from the phone look smaller and objects that are closer look larger, mimicking the depth perception of human eyes (see the sketch after this list).
  • Occlusion occurs when one object blocks another object from view.
  • AR software and hardware need to maintain "context awareness" by tracking the physical objects in any given space and understanding their relationships to each other, i.e. which ones are taller, shorter, further away, and so on.
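To make the scaling rule above concrete, here is a minimal Kotlin sketch assuming a simple pinhole-camera model. The function and its parameters are our own illustration, not part of any AR SDK: apparent on-screen size is proportional to an object's real size and inversely proportional to its distance from the camera.

    // Pinhole-camera approximation of AR scaling. Hypothetical helper,
    // not an ARCore API: apparent size falls off linearly with distance.
    fun apparentHeightPx(
        objectHeightMeters: Double, // real-world height of the virtual object
        distanceMeters: Double,     // camera-to-object distance
        focalLengthPx: Double       // camera focal length expressed in pixels
    ): Double = focalLengthPx * objectHeightMeters / distanceMeters

    fun main() {
        // Doubling the distance halves the on-screen size.
        println(apparentHeightPx(0.5, 1.0, 1000.0)) // 500.0
        println(apparentHeightPx(0.5, 2.0, 1000.0)) // 250.0
    }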

Inside-out vs. outside-in tracking
  • There are two basic ways to track the position and orientation of a device or user: outside-in tracking and inside-out tracking.
  • Outside-in tracking uses external cameras or sensors to detect motion and track positioning. This method offers more precise tracking, but the external sensors reduce portability.
  • Inside-out tracking uses cameras or sensors located within the device itself to track its position in real-world space. This method requires more hardware in the AR device, but offers more portability.
  • On the AR headset side, the Microsoft HoloLens is a device that uses inside-out tracking. On the VR headset side, the HTC Vive is a device that uses outside-in tracking.
  • On the AR mobile side, the Google Pixel is a smartphone that uses inside-out tracking for AR.

Fundamentals of ARCore
  • ARCore integrates virtual content with the real world as seen through your phone's camera and shown on your phone's display with technologies like motion tracking, environmental understanding, and light estimation.
  • Motion tracking uses your phone's camera, internal gyroscope, and accelerometer to estimate its pose in 3D space in real time.
  • Environmental understanding is the process by which ARCore “recognizes” objects in your environment and uses that information to properly place and orient digital objects. This allows the phone to detect the size and location of flat horizontal surfaces like the ground or a coffee table.
  • Light estimation in ARCore is a process that uses the phone’s cameras to determine how to realistically match the lighting of digital objects to the real world’s lighting, making them more believable within the augmented scene.
  • Feature points are visually distinct features in your environment, like the edge of a chair, a light switch on a wall, the corner of a rug, or anything else that is likely to stay visible and consistently placed in your environment.
  • Concurrent odometry and mapping (COM) is the motion tracking process ARCore uses to track the smartphone's location in relation to its surrounding world.
  • Plane finding is the smartphone-specific process by which ARCore determines where surfaces are in your environment and uses those surfaces to place and orient digital objects. ARCore looks for clusters of feature points that appear to lie on common horizontal or vertical surfaces, like tables or walls, and makes these surfaces available to your app as planes. ARCore can also determine each plane's boundary and make that information available to your app. You can use this information to place virtual objects resting on flat surfaces.
  • Anchors “hold” the objects in their specified location after a user has placed them.
  • Motion tracking is not perfect. As you walk around, error, referred to as drift, may accumulate, and the device's pose may not reflect where you actually are. Anchors allow the underlying system to correct that error by indicating which points are important (see the sketch after this list).
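As a rough illustration of how motion tracking, plane finding, anchors, and light estimation come together in code, here is a minimal per-frame sketch against the ARCore Android SDK in Kotlin. It assumes a Session has already been created and resumed elsewhere; rendering and error handling are omitted, and anchoring every plane each frame is for illustration only.

    import com.google.ar.core.Frame
    import com.google.ar.core.Plane
    import com.google.ar.core.Session
    import com.google.ar.core.TrackingState

    // Called once per rendered frame (e.g. from a GLSurfaceView renderer).
    fun onFrame(session: Session) {
        val frame: Frame = session.update() // motion tracking via COM

        // Environmental understanding: planes updated since the last frame.
        for (plane in frame.getUpdatedTrackables(Plane::class.java)) {
            if (plane.trackingState == TrackingState.TRACKING) {
                // Anchors "hold" content in place and let ARCore correct
                // for drift as its map of the room improves.
                val anchor = plane.createAnchor(plane.centerPose)
                // ... hand anchor.pose to your renderer here ...
            }
        }

        // Light estimation: a rough brightness value for shading assets.
        val brightness = frame.lightEstimate.pixelIntensity
    }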

Constraints with current AR
  • AR currently lacks user interface metaphors, meaning that a commonly understood method or language of human interaction has not yet been established.
  • The purpose of the interface metaphor is to give the user instantaneous knowledge about how to interact with the user interface. An example is a QWERTY keyboard or a computer mouse.
  • The details of what makes AR challenging from a technical standpoint are complex, but three influential factors are power, heat, and size.
  • AR requires high processing power, batteries generate heat, and a current challenge is fitting all the necessary components into a small enough form factor to wear on your face comfortably for extended periods of time.
  • Not everything in AR has to be 3D, but the vast majority of assets, applications, and experiences will require at least a little 3D design.
  • Currently, there is a limited base of people with 3D design and interaction skills, such as professional animators, graphic designers, mechanical engineers, or video game creators. For AR to grow, the adoption of 3D design theory, skills, and language needs to become much more widespread. Later on in this course, we’ll be discussing a few programs that are helping overcome this challenge, like Sceneform or Poly API.
  • Computer vision is a blend of artificial intelligence and computer science that aims to enable computers (like smartphones) to visually understand the surrounding world like human vision does. This technology needs to improve in terms of object detection and segmentation to make AR processes more effective.

Use cases and current powers/limitations of AR
  • ARCore can be used to create dynamic experiences for businesses, nonprofits, healthcare, schools, and more.
  • ARCore’s strengths are its phone-based spatial mapping capabilities and addressable user base. Approximately 85% of phones around the world run on the Android operating system.
  • As of the beginning of 2018, ARCore was already available on 100 million Android-powered smartphones, and that number continues to grow. ARCore requires a lot of processing power, so not all older Android models have the necessary specifications yet. ARCore is also available in China.
  • Limitations to consider with contemporary AR technology include: low-light environments, a lack of featured surfaces, and the availability of powerful mobile processors in new phones.

Basic AR interaction options

1. Drag and Drop
2. Voice
3. Tap
4. Pinch and Zoom
5. Slide
6. Tilt

Think like a user
  • User flow is the journey of your app's users and how a person will engage, step by step, with your AR experience.
  • Planning your user flow needs to take into account the scene, the user interactions, any audio cues, and the final user actions.
  • A user flow can be created with simple sketches and panels all collected into one cohesive diagram.
  • UX and UI are complementary fields of product design, and generally speaking UX is the more technical of the two.
  • When considering UX/UI, one good rule of thumb to remember with AR is to avoid cluttering the screen with too many buttons or elements that might be confusing to users.
  • Choosing to use cartoonish designs or lighting can actually make the experience feel more realistic to the user, as opposed to photorealistic assets that fail to meet our expectations when they don't blend in perfectly with the real world.
  • Users might try to "break" your experience by deliberately disregarding your carefully planned user flow, but your resources are better spent on improving your app's usability rather than on trying to prevent bad actors.

Next steps on the AR journey
  • Maya, ZBrush, Blender, and 3ds Max are powerful, professional-grade 3D design tools.
  • Google’s Poly can be a good starting resource for building your first ARCore experience.
  • Poly by Google is a repository of 3D assets that can be quickly downloaded and used in your ARCore experience.
  • The recommended guide for your AR experience is a design document that contains all of the 3D assets, sounds, and other design ideas for your team to implement.
  • You may need to hire advanced personnel to help you build your experience, such as: 3D artists, texture designers, level designers, sound designers, or other professionals.

A closer look at the mechanics of ARCore
  • Surface detection allows ARCore to place digital objects on various surface heights, to render different objects at different sizes and positions, and to create more realistic AR experiences in general.
  • Pose is the position and orientation of any object in relation to the world around it. Everything has its own unique pose: from your mobile device to the augmented 3D asset that you see on your display.
  • Hit-testing lets you establish a pose for virtual objects, and is the next step in the ARCore user process after feature-tracking (finding stationary feature points that inform the environmental understanding of the device) and plane-finding (the smartphone-specific process by which ARCore determines where surfaces are in your environment); see the sketch after this list.
  • Light estimation is a process that allows the phone to estimate the environment's current lighting conditions. ARCore is able to detect objects in suboptimal light and map a room successfully, but it’s important to note that there is a limit to how low the light can be for the experience to function.
  • Occlusion is when one 3D object blocks another 3D object. Currently this is only possible between digital objects; AR objects cannot be occluded by a real-world object. For example, in an AR game a digital object would not be able to hide behind a real couch.
  • With multi-plane detection, assets are scaled appropriately in relation to the established planes, but they only need to be placed on those planes (via anchor points) when doing so makes them function like their real-world counterparts.
  • Immersion can be broken by users interacting with AR objects as if they were physically real. Framing can be used to combat these immersion-breaking interactions.
  • Spatial mapping is the ability to create a 3D map of the environment and helps establish where assets can be placed.
  • Feature points are stationary and are used to further environmental understanding and place planes in an experience. ARCore assumes planes are unmoving, so it is inadvisable to attempt to anchor a digital object to a real world object that is in motion. In general, it’s best not to place an object until the room has been sufficiently mapped and static surfaces have been recognized and designated as feature points.
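A minimal Kotlin sketch of the hit-testing step described above, again against the ARCore Android SDK. It assumes it is called from a screen-tap handler with the current Frame, and leaves rendering out:

    import com.google.ar.core.Anchor
    import com.google.ar.core.Frame
    import com.google.ar.core.Plane

    // On tap: cast a ray from the screen point into the mapped scene and
    // anchor a virtual object on the first detected plane it hits.
    fun onTap(frame: Frame, xPx: Float, yPx: Float): Anchor? {
        val hit = frame.hitTest(xPx, yPx).firstOrNull { result ->
            val trackable = result.trackable
            // Keep only hits that land inside a detected plane's boundary.
            trackable is Plane && trackable.isPoseInPolygon(result.hitPose)
        }
        return hit?.createAnchor() // pose for the virtual object, or null
    }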

Using Poly and Unity to create ARCore assets
  • Unity is a cross-platform game engine and development environment for both 3D and 2D interactive applications. It has a variety of tools, from the simple to the professionally complex, to allow for the streamlined creation of 3D objects and environments.
  • The Poly Toolkit for Unity is a plugin that allows you to import assets from Poly into Unity, both at edit time and at runtime.
  • Edit-time means manually downloading assets from Poly and importing them into your app's project while you are creating your app or experience.
  • Runtime means downloading assets from Poly while your app is running. This allows your app to leverage Poly's ever-expanding library of assets (a sketch of the idea follows below).
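The Poly Toolkit itself is a Unity (C#) plugin, but the runtime idea can be sketched in Kotlin against Poly's public REST assets.get endpoint. This is our own illustration, not toolkit code; the asset ID and API key are placeholders you would supply:

    import java.net.HttpURLConnection
    import java.net.URL

    // Fetch an asset's metadata from the Poly REST API at runtime. The
    // returned JSON lists downloadable formats (e.g. glTF or OBJ) that
    // your app can then download and render.
    fun fetchPolyAssetJson(assetId: String, apiKey: String): String {
        val url = URL("https://poly.googleapis.com/v1/assets/$assetId?key=$apiKey")
        val conn = url.openConnection() as HttpURLConnection
        return conn.inputStream.bufferedReader().use { it.readText() }
    }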


Despite Google's undeniable motivation to promote its own products via this course, its explanation of the summary points above is helpful and clear, enhanced by videos, diagrams and graphics.

Coursera course content is free for university students. Certification is optional for an extra fee.

Google AR & VR. "Introduction to Augmented Reality & ARCore." Coursera, accessed 22 September 2020. https://www.coursera.org/learn/ar.




7/24/2020

Visual storytelling with ARCore


Narrascope 2020: "Visual Storytelling in Immersive Reality" by Matthew Roth.



My name is Matthew. I'm a UX writer at Google Daydream, and this presentation is about how to tell stories as a developer, as a designer, as someone with a pretty cool tool, using the techniques at your disposal.
 
My interview with Google was literally the day that Google Assistant was announced. I was hired to make video games without video, and at the time we had no idea what that meant.

These days, I am working in almost the exact opposite medium: immersive computing. It's virtual reality, augmented reality, anything you can see that isn't really there. Ideally, the written word will intrude as little as possible.
 
But our goal is the same: To take the user through an action experience as naturally as we can, and to have our users spend the minimum time thinking about the medium that the experience takes place in, and the most time being in that experience and participating in it and interacting with it.
 
When you play a game, you experience the story by telling it to yourself.

Any game gives you tools: a weapon, a spell book, the ability to make monsters vanish by jumping on their heads. And by picking up these things, or casting them, or jumping on those heads, you're telling your own story within the boundaries imposed on you by the master storyteller, the game designer.
 
AR gives you a whole collection of game mechanics: constructing, crafting, discovery and more. ARCore is Google's engine for running AR, which appears on most of the latest generations of Android phones. Apple has a similar engine called ARKit, and there are a few others too. They have different features and annoyances, but basically what they do is look at what your camera is seeing at any given moment (plus the moments just before and just after), and extrapolate all this information to put together a picture of the world around you in 3D. Augmented reality is a way to see virtual content in the real world. The way I explain it to my mom is: VR is you in an imaginary world; AR is imaginary objects in the real world. ARCore is a platform for building AR apps on Android phones. It keeps track of three things to construct these AR worlds: motion tracking, environmental understanding, and light estimation.
 
Although these design principles relate to AR Core, they still apply to all sorts of tools.
 
GET TO KNOW THE TECHNOLOGY

I am always looking for ways to dig into the technology, because if I can make myself understand it, then I can really do anything with it. I start to figure out how the developers are seeing this world. And therefore, instead of just saying "make this happen" or "can you make this happen," I'm saying, "hey, can you use these tools to make this happen this way?"

MAKE ON-BOARDING PART OF THE NARRATIVE

Once you've established your place, you need to convince the user to scan the real-world surroundings so the phone can calibrate its whereabouts. That creates an intrinsic delay that's built into mobile AR, so we need to give the user a mission, an excuse to discover the world and physically move the phone around. So we give them a fetch quest. Fetch quests are normally done badly, because they so often fill in the dead space between fighting; it's like you're saying to the player, "the thing you usually do is run around and fight people. Now let's get rid of the fighting-people part and just run around for no good reason." We've effectively "freemium qualified" them.
 
In the user journey, that running around and the technical functionality of scanning for surfaces are exactly the same. The idea of discovery is at the core of the experience, so we want to embrace that moment and make the narrative wedded to it: not just "do this so that you can play the game," but have it be the beginning of the game.
 
So we want to make it awesome.

These AR techniques that I'm talking about, like plane finding and setting up boundaries, don't necessarily feel like storytelling techniques, but they are.
 
ADD DEVELOPMENT, NOT REPETITION

I'm going to tell you all about one of my favourite game designers: Aristotle. Aristotle created a storytelling structure that a lot of us still use today. First, there's an inciting incident, an explosion of birth and death, like getting lifted up by a tornado and landing on a witch. Something happens that the main character is not necessarily in control of. Then the character reacts to it: they strap on the witch's slippers and follow the yellow brick road. That's the Act One climax, the hero's call to action. And then there's Act Two, which is most of the experience, the middle hour and a half of a two-hour movie. It involves a series of events where the character faces challenges; each one gets harder than the last, and each one reveals a new part of the character, a part that the audience has never seen before. That's also why a game gets grindy if battles are too repetitive: it's no longer revealing something new and unseen. You're not getting character development, or character building, or a character amping up with new swords.
 
Ideally, each episode or interaction gets more challenging until the end of that act, when the character faces complete despair. It's the lowest moment, the dark night of the soul: you must fall completely before you get up again.
 
Blake Snyder, who wrote Save the Cat, which is this book that all these Hollywood writers use, has this thesis. I don't know if I agree with it, but I've seen it used in such great ways. The thesis is that you always show an interaction with the main character at the very beginning of a movie or a game or whatever you're doing, where the character saves a cat, because even if they're an evil person, they've saved the cat, so you identify with them. I don't know. Let's talk about that later.
 
But then we get to the moment where we need a complete reversal, from the darkest night into the final blaze of glory. So that's the story of every story. It's also the story of each moment of a video game. The first moments of Pac-Man are filled with tension: the frantic thoughts of escape, of limited motion, of being eaten, of "I can only move left or right."
 
DESIGN FOR AWE AND RESPONSE
You want to design a world that's both amazing to look at and one that reacts to your presence. Make your world real. On the screen, you're combining virtual and real-world objects, so let them interact; the more they play together and roll off each other, either metaphorically or literally, the more your world will feel like an actual living, inhabited place. And let players mess with it. The virtual world and the actual world are only two dimensions of the experience. In order to make it feel real, we need to add the third dimension: the user. Let people touch, manipulate and change as many virtual objects as you can, as many as it makes sense to be changed and manipulated.
 
Here's the biggest obstacle … users can be in one of four positions: 1) seated with their hands fixed, 2) seated with their hands moving around, 3) standing still with their hands fixed, or 4) walking around and moving in a real-world space.
 
PHYSICALITY IS EMOTION
By designing the mechanics for your experience, you can change their physical position as well as their mental experience. In other words, you can blow their minds by blowing their bodies out of their seats.
 
One of the greatest advantages we have in AR is the size of our space: it's theoretically infinite. The problem is most users don't remember that, and they stay stuck in one spot. So you want to give them something to chase, so they actually have to move the phone around. And that little nudging icon will help them do that.
 
Now I'm going to talk about what players can actually do in AR game mechanics.
 
Like every story, you can break every moment down into a first, second and third act: a call to action, a hero's quest, and a culmination. You want to take advantage of the real-world environment: put things just out of users' reach and offer them rewards for moving around and exploring.
 
Hidden bonus levels are a time-honored tradition; finding them in your living room gives you an extra measure of delight.
 
When you place objects in your AR scene, users will want to play with them, and the more non-necessary stuff they can pick up and play with, the more they want to hunt for the objects that they do need to use.

But you want to give breadcrumbs too.  You can break reality selectively. 
 
SETTING IS A STORYTELLING TOOL
But as a world builder, you're empowered to decide when you want it to be realistic and when you want to withhold that realism. If you need to draw the user's attention to an object or an area or an evil robot, the entire world is at your command. You have lighting, shading, texture and physics at your disposal. You can highlight things; you can play things down and move them into the shadows.
 
MOTION IS EMOTION
We can use a single effect like jump scares to achieve a bunch of different emotional goals. The best jump scares, all the best moments of connection, happen when you forget there's a screen separating you from the movie or the game.
 
Having the action right in front of you makes that separation even easier to forget. It's in my room. It's on my bed. It's right in front of me and it's happening with me. One warning: you just don't want to make the user move backward without looking behind them, because that can have real-world disastrous effects. At some point in the history of games, sneaking past enemies was just a way not to get killed. At some point game designers started recording your visibility percentage and lines of sight, and sneaking became an actual measurable mechanic. Just think of how recording everything in the real world can let us change enemy AI.
 
USE SURPRISING INPUTS
If your game's input is the camera, then let anything you see be an input trigger. We can record degrees of light, and light is especially useful because it's so easy to manipulate when you are indoors anyway, and because it's so unexpected to the user. You can see that when the light is switched off here, cars turned on their lights and buildings lit up, like at night.
 
But it's also something that the user just didn't expect, and we have it completely at our disposal. Here's a game that's really cool. It's really simple. The first player chooses a spot to bury the treasure. You tap anywhere and the game buries it. Then you hand your phone to the next player and they dig around until they find it.
 
When you create or play in the world, you always have the potential to interact with other users. Cloud anchors make this uniquely possible by matching virtual content with real-world locations, then serving the same content to different users. All you need is another person and a path to the shared world. This game is played on two phones, and both phones are sensing the same environment. There are techniques to do that, and I will show you how to find them.
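For the technically curious, the two-phone treasure game above maps onto ARCore's Cloud Anchors API roughly like this. A Kotlin sketch, assuming the Cloud Anchor service is configured for the app; polling of the hosting state is elided:

    import com.google.ar.core.Anchor
    import com.google.ar.core.Session

    // Phone 1: host the buried-treasure anchor. ARCore uploads visual data
    // around it; once hosting succeeds, anchor.cloudAnchorId can be sent
    // to the other player over any channel you like.
    fun hostTreasure(session: Session, localAnchor: Anchor): Anchor =
        session.hostCloudAnchor(localAnchor)

    // Phone 2: resolve the shared id to get an anchor at the same
    // real-world spot, so both players see the same buried treasure.
    fun findTreasure(session: Session, cloudAnchorId: String): Anchor =
        session.resolveCloudAnchor(cloudAnchorId)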
 
 
CREATE MANY DIFFERENT PATHS TO ENGAGEMENT
In some ways, AR is a great way to create an all-access route for differently abled users to see things at their own scale. However, it comes with a whole set of new challenges. If you tell users to reach up and grab something, or take two steps forward, what happens when your user can't reach the device or take steps?

Here, we've added in an alternate way for users to reach faraway objects: there's a reticle that stretches and extends based on the angle of your phone. This is a very Googley concept, something we keep talking about: having many paths to success. There are people who like keyboard shortcuts versus people who like mouse interactions; there are people who want to do things the slow way. We create different paths to success.
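The stretching reticle can be approximated with a little trigonometry. The sketch below is our own hypothetical reconstruction of the idea in Kotlin, not code from the demo: the more the user tilts the phone up from the floor, the further away the reticle lands.

    import kotlin.math.tan

    // Map phone pitch to reticle distance by intersecting the view ray
    // with the floor plane, clamped so the reticle never shoots off to
    // infinity near the horizon. All names and values are illustrative.
    fun reticleDistanceMeters(
        pitchRadians: Double,             // 0 = straight down, PI/2 = horizon
        cameraHeightMeters: Double = 1.4, // assumed height of a held phone
        maxDistanceMeters: Double = 10.0
    ): Double = (cameraHeightMeters * tan(pitchRadians))
        .coerceIn(0.0, maxDistanceMeters)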

Roth, Matthew. 2020. "Visual Storytelling in Immersive Reality." Narrascope 2020. YouTube.





Author

The USW Audience of the Future research team is compiling a summary collection of recent research in the field of immersive and enhanced reality media.

