CO-CREATING THE AUDIENCE OF THE FUTURE

10/5/2020

Why play mobile AR games?


Why do people play location-based augmented reality games: A study on Pokémon GO

This article reviews findings from a survey of over 2,000 Pokémon GO players regarding their motivations to play the game.

Earlier experiences, especially with the same franchise, social influence, and popularity were the most common reasons to adopt the game, while progressing in the game was the most frequently reported reason to continue playing. The player's personal situation outside the game and playability problems were the most significant reasons to quit.

The Pokémon GO brand is widely known, it has nostalgic value, and its characters are simple and attractive even to those unfamiliar with them. The “gotta catch ’em all” theme of Pokémon is well suited to a location-based game where the player can go to different places to find and catch different creatures.

Reasons to start playing Pokémon GO

PREVIOUS EXPERIENCE
As many as 43.9% of the respondents reported previous experience with similar games, franchises, or hobbies as a reason to pick up the game. Of these, experience with Pokémon was by far the most frequent, mentioned by 39.6% of the respondents. The idea of the game brought up nostalgic feelings of childhood moments playing Pokémon. Some had dreamt of being a Pokémon trainer as a child, and the game felt like the closest thing to fulfilling that dream. A smaller share cited previous experience with geocaching, Ingress (Niantic, 2013) or other location-based games, or with games in general.

SOCIAL INFLUENCE
Parents mentioned either wanting to be more informed about their children's activities or wanting something common to do together with them. Similarly, a friend's or a partner's recommendations or wanting to spend time with them while playing were reported.

POPULARITY
The hype around the game and the visibility of the players had a major effect.

POSITIVE CHARACTERISTICS
Physical exercise and spending time outdoors while playing were appealing. In addition, the respondents liked the idea of being encouraged to explore their surroundings and new areas.

NOVEL TECHNOLOGY
The location-based characteristics or the AR features were a reason to try the game.

SITUATION and CONVENIENCE
Respondents mentioned wanting something fun to do while doing other, less interesting activities, or having a conveniently located PokéStop nearby. Some mentioned having a new phone, which made trying the game out convenient. The game being free and good weather were also mentioned.

GENERAL KNOWLEDGE 
People stated they picked up the game because they wanted to keep up with the times.

SOCIAL FEATURES
The general sociability of the game, liking to compete or wanting to help others were brought up. Some felt that playing would be a good opportunity to meet new people, even potential partners.

GAME MECHANICS 
Looking for, hunting, and collecting Pokémon was fun. The “treasure hunt”-style gameplay was seen as exciting.

THE NATURE OF THE GAME
The game being casual enough and easy to access was appealing.

Reasons to continue playing Pokémon GO

PROGRESSION  
The most common individual reason to keep on playing was collecting Pokémon. Achieving personal goals, the joy of discovery, and the general feeling of advancement were also motivating.

POSITIVE ASPECTS
Again, exercise and outdoor activities interested the players, and having a reason to go out and walk was motivating.

SOCIAL FEATURES 
Players mentioned wanting to meet new people while playing, or playing together with friends or family. The game functioned as an easy way to connect people and create a feeling of community.

SOCIAL INFLUENCE
This could mean parents wanting to be up to date and informed about their children's hobby or avoiding being left out of social circles when all friends were still playing the game.

INTEREST 
The game continued to feel interesting or fun.

FUTURE EXPECTATIONS 
Some players were curious about how the game was going to change or waiting for a specific update.

THE NATURE OF THE GAME 
Some appreciated the casual nature of the game, which made it easy to play, while others felt that its challenging side was a positive. The game provided surprises and was rewarding.

While previous experiences, especially with the Pokémon brand, were brought up as the number one reason to start the game, they were rarely mentioned as the reason to continue playing.

Only a few respondents reported technology related reasons to continue playing, for instance liking the location-based properties or the AR features.

Reasons to stop playing Pokémon GO

SITUATION 
Getting bored, a lack of time or money, poor or cold weather, and health problems were mentioned, while some had quit due to their phone breaking or the game not working where they lived. Some had achieved their goal and had thus decided to quit, while others felt the hype was settling down.

PROGRESSION 
The leveling curve was seen to be too steep: the required experience points needed for a new level rose exponentially, while the earned experience points stayed the same, making it necessary to grind to advance. Similarly, when reaching a certain point in collecting the Pokémon, it became increasingly hard to find any new ones to advance towards the goal of catching them all.

FUNCTIONALITY PROBLEMS
Bugs, the game crashing, and the game not registering walked distances properly were mentioned. The respondents criticized the unequal playing opportunities caused by Pokémon and PokéStops being concentrated in city centers. In addition, some disliked that the game needed to stay active at all times, even when playing passively, which drained the battery.

SHALLOW
The shortcomings of the game, especially the lack of content, were seen to be problematic. Some players would have wanted more features or more Pokémon.

CHANGE
Sometimes players felt that the game was changing for the worse. For instance, the removal of the Nearby feature, which had made locating Pokémon easier, made some stop playing.

BAD REPUTATION
Niantic was criticized for its lack of communication with the public, and some respondents even said they no longer saw the company as trustworthy.

SOCIAL INFLUENCE
If friends no longer played the game, some respondents explained not feeling like continuing the game alone. Other people could have a negative influence, for instance by cheating.

Alha, K., Koskinen, E., Paavilainen, J. and Hamari, J., 2019. Why do people play location-based augmented reality games: A study on Pokémon GO. Computers in Human Behavior, 93, pp.114-122.


9/22/2020

Course: intro to AR


Google Coursera course: Introduction to Augmented Reality and ARCore

We don't normally summarise course content.  Yet, this particular course offers a clear and accessible introduction to augmented reality technologies, so it seems helpful to flag it here.  The main points covered within the course are listed below:

The basics of augmented reality
  • Humankind’s first foray into immersive reality through a head-mounted display was the “Sword of Damocles,” created by Ivan Sutherland in 1968.
  • HMD is the acronym for “head-mounted display.”
  • The term “Augmented Reality” was coined by two Boeing researchers in 1992.
  • A standalone headset is a VR or AR headset that does not require external processors, memory, or power.
  • Through the combination of their hardware and software, many smartphones can view AR experiences that are less immersive than HMDs.
  • Many of the components in smartphones—gyroscopes, cameras, accelerometers, miniaturized high-resolution displays—are also necessary for AR and VR headsets.
  • The high demand for smartphones has driven the mass production of these components, resulting in greater hardware innovations and decreases in costs.
  • Project Tango was an early AR experiment from Google that combined custom software and hardware innovations, leading to a phone with depth-sensing cameras and powerful processors to enable high-fidelity AR.
  • An evolution of Project Tango, ARCore is Google’s platform for building augmented reality experiences.

AR functionality
  • In order to seem real, an AR object has to act like its equivalent in the real world. Immersion is the sense that digital objects belong in the real world.
  • Breaking immersion means that the sense of realism has been broken; in AR this is usually by an object behaving in a way that does not match our expectations.
  • Placing is when the tracking of a digital object is fixed, or anchored, to a certain point in the real world.
  • Scaling is when a placed AR object changes size and/or dimension relative to the AR device's position. For example, when a user moves away or towards an AR object, it feels like the object is getting larger or smaller depending on the distance of the phone in relation to the object. AR objects further away from the phone look smaller and objects that are closer look larger. This should mimic the depth perception of human eyes.
  • Occlusion occurs when one object blocks another object from view.
  • AR software and hardware need to maintain “context awareness” by tracking the physical objects in any given space and understanding their relationships to each other -- i.e. which ones are taller, shorter, further away, etc.
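The scaling behaviour described above follows simple pinhole-camera geometry: an object's on-screen size is inversely proportional to its distance from the camera. A minimal Python sketch of that relationship (the function name and the focal-length value are ours, purely illustrative, not part of any AR toolkit):

```python
def apparent_size(real_size_m, distance_m, focal_length_px=1000):
    """On-screen size (in pixels) of an object under a pinhole camera model."""
    if distance_m <= 0:
        raise ValueError("object must be in front of the camera")
    return focal_length_px * real_size_m / distance_m

# A 0.5 m object viewed from 1 m appears twice as large as from 2 m,
# which is exactly the depth cue a convincing AR object must reproduce.
near = apparent_size(0.5, 1.0)   # 500.0 px
far = apparent_size(0.5, 2.0)    # 250.0 px
```

An AR object that fails to shrink and grow this way as the phone moves is one of the fastest ways to break immersion.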

Inside-out vs. outside-in tracking
  • There are two basic ways to track the position and orientation of a device or user: outside-in tracking and inside-out tracking.
  • Outside-in tracking uses external cameras or sensors to detect motion and track positioning. This method offers more precise tracking, but the external sensors reduce portability.
  • Inside-out tracking uses cameras or sensors located within the device itself to track its position in the real world space. This method requires more hardware in the AR device, but offers more portability.
  • On the AR headset side, the Microsoft HoloLens is a device that uses inside-out tracking. On the VR headset side, the HTC Vive is a device that uses outside-in tracking.
  • On the AR mobile side, the Google Pixel is a smartphone that uses inside-out tracking for AR.

Fundamentals of ARCore
  • ARCore integrates virtual content with the real world as seen through your phone's camera and shown on your phone's display with technologies like motion tracking, environmental understanding, and light estimation.
  • Motion tracking uses your phone's camera, internal gyroscope, and accelerometer to estimate its pose in 3D space in real time.
  • Environmental understanding is the process by which ARCore “recognizes” objects in your environment and uses that information to properly place and orient digital objects. This allows the phone to detect the size and location of flat horizontal surfaces like the ground or a coffee table.
  • Light estimation in ARCore is a process that uses the phone’s cameras to determine how to realistically match the lighting of digital objects to the real world’s lighting, making them more believable within the augmented scene.
  • Feature points are visually distinct features in your environment, like the edge of a chair, a light switch on a wall, the corner of a rug, or anything else that is likely to stay visible and consistently placed in your environment.
  • Concurrent odometry and mapping (COM) is a motion tracking process for ARCore, and tracks the smartphone’s location in relation to its surrounding world.
  • Plane finding is the smartphone-specific process by which ARCore determines where surfaces are in your environment and uses those surfaces to place and orient digital objects. ARCore looks for clusters of feature points that appear to lie on common horizontal or vertical surfaces, like tables or walls, and makes these surfaces available to your app as planes. ARCore can also determine each plane's boundary and make that information available to your app. You can use this information to place virtual objects resting on flat surfaces.
  • Anchors “hold” the objects in their specified location after a user has placed them.
  • Motion tracking is not perfect. As you walk around, error, referred to as drift, may accumulate, and the device's pose may not reflect where you actually are. Anchors allow the underlying system to correct that error by indicating which points are important.
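The anchor-and-drift idea above can be illustrated with a toy model: objects store their poses relative to an anchor, so when the system corrects the anchor's world pose, everything attached to it moves with the correction. This is a 2-D conceptual sketch, not the ARCore API (the class and method names are hypothetical):

```python
# Toy 2-D illustration of how anchors absorb tracking drift.
class Anchor:
    def __init__(self, world_xy):
        self.world_xy = world_xy     # pose the tracking system keeps corrected
        self.children = []           # object offsets stored relative to the anchor

    def attach(self, offset_xy):
        self.children.append(offset_xy)

    def world_positions(self):
        ax, ay = self.world_xy
        return [(ax + ox, ay + oy) for ox, oy in self.children]

anchor = Anchor((2.0, 3.0))
anchor.attach((0.5, 0.0))            # a virtual object half a metre from the anchor

# Drift detected: the system corrects the anchor's pose...
anchor.world_xy = (2.1, 2.9)
# ...and every attached object follows automatically.
print(anchor.world_positions())      # [(2.6, 2.9)]
```

This is why placed objects should be attached to anchors rather than given fixed world coordinates: the correction happens in one place.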

Constraints with current AR
  • Currently AR has a lack of user interface metaphors, meaning that a commonly understood method or language of human interaction has not been established.
  • The purpose of the interface metaphor is to give the user instantaneous knowledge about how to interact with the user interface. An example is a QWERTY keyboard or a computer mouse.
  • The details of what makes AR challenging from a technical standpoint are complex, but three influential factors are power, heat, and size.
  • AR requires high processing power, batteries generate heat, and a current challenge is fitting all the necessary components into a small enough form factor to wear on your face comfortably for extended periods of time.
  • Not everything in AR has to be 3D, but the vast majority of assets, applications, and experiences will require at least a little 3D design.
  • Currently, there is a limited base of people with 3D design and interaction skills, such as professional animators, graphic designers, mechanical engineers, or video game creators. For AR to grow, the adoption of 3D design theory, skills, and language needs to become much more widespread. Later on in this course, we’ll be discussing a few programs that are helping overcome this challenge, like Sceneform or Poly API.
  • Computer vision is a blend of artificial intelligence and computer science that aims to enable computers (like smartphones) to visually understand the surrounding world like human vision does. This technology needs to improve in terms of object detection and segmentation to make AR processes more effective.

Use cases and current powers/limitations of AR
  • ARCore can be used to create dynamic experiences for businesses, nonprofits, healthcare, schools, and more.
  • ARCore’s strengths are its phone-based spatial mapping capabilities and addressable user base. Approximately 85% of phones around the world run on the Android operating system.
  • As of the beginning of 2018, ARCore was already available on 100 million Android-powered smartphones, and that number continues to grow. ARCore requires a lot of processing power, so not all older Android models meet the necessary specifications yet. ARCore is also available in China.
  • Limitations to consider with contemporary AR technology include: low-light environments, a lack of featured surfaces, and the availability of powerful mobile processors in new phones.

Basic AR interaction options

1.     Drag and Drop
2.     Voice
3.     Tap
4.     Pinch and Zoom
5.     Slide
6.     Tilt

Think like a user
  • User flow is the journey of your app's users and how a person will engage, step by step, with your AR experience.
  • Planning your user flow needs to take into account the scene, the user interactions, any audio cues, and the final user actions.
  • A user flow can be created with simple sketches and panels all collected into one cohesive diagram.
  • UX and UI are complementary fields of product design, and generally speaking UX is the more technical of the two.
  • When considering UX/UI, one good rule of thumb to remember with AR is to avoid cluttering the screen with too many buttons or elements that might be confusing to users.
  • Choosing to use cartoonish designs or lighting can actually make the experience feel more realistic to the user, as opposed to photorealistic assets that fail to meet our expectations when they don't blend in perfectly with the real world.
  • Users might try to “break” your experience by deliberately disregarding your carefully planned user flow, but your resources are better spent on improving your app’s usability rather than trying to prevent bad actors.

Next steps on the AR journey
  • Advanced 3D design tools like Maya, Zbrush, Blender, and 3ds Max are powerful professional tools.
  • Google’s Poly can be a good starting resource for building your first ARCore experience.
  • Poly by Google is a repository of 3D assets that can be quickly downloaded and used in your ARCore experience.
  • The recommended guide for your AR experience is a design document that contains all of the 3D assets, sounds, and other design ideas for your team to implement.
  • You may need to hire advanced personnel to help you build your experience, such as: 3D artists, texture designers, level designers, sound designers, or other professionals.

A closer look at mechanics of ARCore
  • Surface detection allows ARCore to place digital objects on various surface heights, to render different objects at different sizes and positions, and to create more realistic AR experiences in general.
  • Pose is the position and orientation of any object in relation to the world around it. Everything has its own unique pose: from your mobile device to the augmented 3D asset that you see on your display.
  • Hit-testing lets you establish a pose for virtual objects and is the next step in the ARCore user process after feature-tracking (finding stationary feature points that inform the environmental understanding of the device) and plane-finding (the smartphone-specific process by which ARCore determines where horizontal surfaces are in your environment).
  • Light estimation is a process that allows the phone to estimate the environment's current lighting conditions. ARCore is able to detect objects in suboptimal light and map a room successfully, but it’s important to note that there is a limit to how low the light can be for the experience to function.
  • Occlusion is when one 3D object blocks another 3D object. Currently this is only possible with digital objects, and AR objects cannot be occluded by a real world object. For example, in an AR game, a digital object would not be able to hide behind a real couch in the real world.
  • Assets in multi-plane detection are scaled appropriately in relation to the established planes, but only need to be placed on them (via anchor points) when doing so makes them function like their real-world counterparts.
  • Immersion can be broken by users interacting with AR objects as if they were physically real. Framing can be used to combat these immersion-breaking interactions.
  • Spatial mapping is the ability to create a 3D map of the environment and helps establish where assets can be placed.
  • Feature points are stationary and are used to further environmental understanding and place planes in an experience. ARCore assumes planes are unmoving, so it is inadvisable to attempt to anchor a digital object to a real world object that is in motion. In general, it’s best not to place an object until the room has been sufficiently mapped and static surfaces have been recognized and designated as feature points.
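The hit-testing step described above reduces to a piece of geometry: cast a ray from the camera through the tapped screen point and intersect it with a detected plane. A minimal Python sketch of that ray-plane intersection for a horizontal plane (this is the underlying math, not the ARCore API; the function name is ours):

```python
def hit_test(ray_origin, ray_dir, plane_y):
    """Intersect a ray with a horizontal plane at height plane_y.
    Returns the 3-D hit point, or None if there is no forward intersection."""
    ox, oy, oz = ray_origin
    dx, dy, dz = ray_dir
    if abs(dy) < 1e-9:
        return None                  # ray is parallel to the plane
    t = (plane_y - oy) / dy          # distance along the ray to the plane
    if t < 0:
        return None                  # plane is behind the camera
    return (ox + t * dx, oy + t * dy, oz + t * dz)

# Camera 1.5 m above the floor, looking forward and down at 45 degrees.
hit = hit_test((0.0, 1.5, 0.0), (0.0, -1.0, -1.0), plane_y=0.0)
print(hit)                           # (0.0, 0.0, -1.5)
```

The returned point is where a hit test would establish the pose of a newly placed virtual object.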

Using Poly and Unity to create ARCore assets
  • Unity is a cross-platform game engine and development environment for both 3D and 2D interactive applications. It has a variety of tools, from the simple to the professionally complex, to allow for the streamlined creation of 3D objects and environments.
  • Poly toolkit for Unity is a plugin that allows you to import assets from Poly into Unity at edit time and at runtime.
  • Edit-time means manually downloading assets from Poly and importing them into your app's project while you are creating your app or experience.
  • Runtime means downloading assets from Poly when your app is running. This allows your app to leverage Poly's ever-expanding library of assets.


Despite Google's undeniable motivation to promote its own products via this course, its explanation of the points summarized above is clear and helpful, enhanced by videos, diagrams, and graphics.

Coursera course content is free for university students. Certification is optional for an extra fee.

Google AR & VR. “Introduction to Augmented Reality & ARCore.” Coursera, accessed 22 September 2020. https://www.coursera.org/learn/ar.




7/24/2020

Visual storytelling with ARCore


Narrascope 2020: Visual storytelling in Immersive Reality by Matthew Roth.



My name is Matthew. I'm a UX writer at Google Daydream and this presentation is about how to tell stories as a developer, as a designer, as someone with a pretty cool tool using the techniques at your disposal.
 
My interview with Google was literally the day that Google Assistant was announced. I was hired to make video games without video, and at the time, we had no idea what that meant.

These days, I am working in almost the exact opposite media, immersive computing. It's virtual reality, augmented reality, anything you can see that isn't really there. Ideally, the written word will intrude as little as possible.
 
But our goal is the same: To take the user through an action experience as naturally as we can, and to have our users spend the minimum time thinking about the medium that the experience takes place in, and the most time being in that experience and participating in it and interacting with it.
 
When you play a game, you experience the story by telling it to yourself.

Any game gives you tools: a weapon, a spell book, the ability to make monsters vanish by jumping on their heads. And by picking up these things, or casting them, or jumping on those heads, you're telling your own story within the boundaries imposed on you by the master storyteller, the game designer.
 
AR gives you a whole collection of game mechanics: constructing, crafting, discovery, and more. ARCore is Google's engine for running AR, which ships on most of the latest generations of Android phones. Apple has a similar engine called ARKit, and there are a few others too. They have different features and annoyances, but basically what they do is look at what your camera is seeing at any given moment (plus the moments just before and after) and extrapolate all this information to put together a picture of the world around you in 3D. Augmented reality is a way to see virtual content in the real world. The way I explain it to my mom is: VR is the real you in an imaginary world; AR is imaginary objects in the real world. ARCore is a platform for building AR apps on Android phones. It keeps track of three things to construct these AR worlds: motion tracking, environmental understanding, and light estimation.
 
Although these design principles relate to ARCore, they still apply to all sorts of tools.
 
GET TO KNOW THE TECHNOLOGY

I am always looking for ways to dig into the technology, because if I can make myself understand it, then I can really do anything with it. I start to figure out how the developers are seeing this world. And therefore, instead of just saying "make this happen," or "can you make this happen," I'm saying, "hey, can you use these tools to make this happen this way?"

MAKE ON-BOARDING PART OF THE NARRATIVE

Once you've established your place, you need to convince the user to scan their real-world surroundings so that the phone can calibrate its whereabouts. That creates an intrinsic delay that's built into mobile AR, so we need to give the user a mission, an excuse to discover the world and physically move the phone around. So we give them a fetch quest, and fetch quests are normally done badly. Because fetch quests so often fill in the dead space between fighting, it's like you're saying to the player: the thing you usually do is run around and fight people; now let's get rid of the fighting-people part and just run around for no good reason. We've effectively freemium-qualified them.
 
In the user journey, that running around and the technical functionality of scanning for surfaces are exactly the same. The idea of discovery is at the core of the experience, so we want to embrace that moment and wed the narrative to it: not just have it be "do this so that you can play the game," but have it be the beginning of the game.
 
So we want to make it awesome.

These AR techniques that I'm talking about, like plane finding and setting up boundaries. They don't necessarily feel like storytelling techniques, but they are.
 
ADD DEVELOPMENT, NOT REPETITION

I'm going to tell you about one of my favourite game designers: Aristotle. Aristotle created a storytelling structure that a lot of us still use today. First, there's an inciting incident, an explosion of birth and death, like getting lifted up by a tornado and landing on a witch. Something happens that the main character is not necessarily in control of. Then the character reacts to it. They strap on the witch's slippers and follow the yellow brick road. That's the Act One climax, the hero's call to action. And then Act Two, which is most of the experience: the middle hour and a half of a two-hour movie. It involves a series of events where the character faces challenges, each one harder than the last, and each one revealing a new part of the character, a part that the audience has never seen before. That's also why a game gets grindy if battles are too repetitive: it's no longer revealing something new and unseen. You're not getting character development, character building, or the character powering up with new swords.
 
Ideally each episode or interaction gets more challenging until the end of the act, when the character faces complete despair. It's the lowest moment, the dark night of the soul: you must fall completely before you get up again.
 
Blake Snyder, who wrote Save the Cat, the book that all these Hollywood writers use, has this thesis. I don't know if I agree with it, but I've seen it used in such great ways. The thesis is that at the very beginning of a movie or a game or whatever you're doing, you always show an interaction where the main character saves a cat, because even if they're an evil person, they've saved the cat, so you identify with them. I don't know. Let's talk about that later.
 
But then we get to the moment where we need a complete reversal, from the darkest night into the final blaze of glory. So that's the story of every story. It's also the story of each moment of a video game. The first moments of Pac-Man are filled with tension: the frantic thoughts of escape, of limited motion, of being eaten, of "I can only move left or right."
 
DESIGN FOR AWE AND RESPONSE
You want to design a world that's both amazing to look at and one that reacts to your presence. Make your world real. On the screen, you're combining virtual and real-world objects, so let them interact: the more they play together and roll off each other, either metaphorically or literally, the more your world will feel like an actual living, inhabited place. And let players mess with it. The virtual world and the actual world are only two dimensions of the experience. In order to make it feel real, we need to add the third dimension: the user. Let people touch, manipulate, and change as many virtual objects as you can, as many as it makes sense to change and manipulate.
 
Here's the biggest obstacle: users can be in one of four positions, 1) seated with their hands fixed, 2) seated with their hands moving around, 3) standing still with their hands fixed, or 4) walking around and moving through a real-world space.
 
PHYSICALITY IS EMOTION
By designing the mechanics for your experience, you can change their physical position as well as their mental experience. In other words, you can blow their minds by blowing their bodies out of their seats.
 
One of the greatest advantages we have in AR is the size of our space; it's theoretically infinite. The problem is most users don't remember that, so they stay stuck where they are. So you want to give them something to chase, so they actually have to move the phone around. And that little nudging icon will help them do that.
 
Now I'm going to talk about what players can actually do in AR game mechanics.
 
Like every story, you can break every moment down into a first, second, and third act: a call to action, a hero's quest, and a culmination. You want to take advantage of the real-world environment: put things just out of users' reach, and offer them rewards for moving around and exploring.
 
Hidden bonus levels are a time-honored tradition; finding them in your living room gives you an extra measure of delight.
 
When you place objects in your AR scene, users will want to play with them, and the more non-necessary stuff they can pick up and play with, the more they'll want to hunt for the objects that they do need to use.

But you want to give breadcrumbs too.  You can break reality selectively. 
 
SETTING IS A STORYTELLING TOOL
But as a world builder, you're empowered to decide when you want it to be realistic and when you want to withhold that realism. If you need to draw the user's attention to an object or an area or an evil robot, the entire world is at your command. You have lighting, shading, texture, and physics at your disposal. You can highlight things, and you can play things down and move them into the shadows.
 
MOTION IS EMOTION
We can use a single effect, like jump scares, to achieve a bunch of different emotional goals. The best jump scares, all the best moments of connection, happen when you forget there's a screen separating you from the movie or the game.
 
Having the action right in front of you makes that separation even easier to forget. It's in my room. It's on my bed. It's right in front of me and it's happening with me. One warning: you just don't want to make the user move backward without looking behind them, because that can have disastrous real-world effects. At some point in the history of games, sneaking past enemies was just a way not to get killed. Then game designers started recording your visibility percentage and lines of sight, and sneaking became an actual, measurable mechanic. Just think of how recording everything in the real world can let us change enemy AI.
 
USE SURPRISING INPUTS
If your game's input is the camera, then let anything you see be the input trigger. We can record degrees of light, and light is especially useful because it's so easy to manipulate when you are indoors anyway, and because it's so unexpected to the user. You can see that when the light is switched off here, cars turn on their lights and buildings light up, like at night.
 
But it's also something that the user just didn't expect, but we have it completely at our disposal. Here’s a game that’s really cool. It's really simple. The first player chooses a spot to bury the treasure. You tap anywhere and the game buries it. Then you hand your phone to the next player and they dig around until they find it.
 
When you create or play in the world, you always have the potential to interact with other users. Cloud Anchors make this uniquely possible by matching virtual content with real-world locations, then serving the same content to different users. All you need is another person and a path to the shared world. This game is played on two phones, and both phones are sensing the same environment. There are techniques to do that; I will show you how to find them.
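The shared-world flow described here is essentially host-and-resolve: one device hosts an anchor with a service, receives an ID, and shares it; the second device resolves that ID and gets the same real-world pose. A toy Python sketch of that idea (a stand-in dictionary plays the role of the anchor service; none of these names are the real ARCore Cloud Anchors API):

```python
# Toy sketch of the Cloud Anchors idea: host an anchor, share its ID,
# resolve the ID on a second device to recover the same pose.
cloud = {}                              # stands in for the hosted anchor service
next_id = 0

def host_anchor(pose):
    """First phone uploads a pose and gets back a shareable ID."""
    global next_id
    next_id += 1
    anchor_id = f"anchor-{next_id}"
    cloud[anchor_id] = pose
    return anchor_id

def resolve_anchor(anchor_id):
    """Second phone looks up the shared ID and receives the same pose."""
    return cloud.get(anchor_id)

# Player one buries the treasure, then shares the ID with player two.
shared_id = host_anchor((2.0, 0.0, -1.5))
print(resolve_anchor(shared_id))        # (2.0, 0.0, -1.5)
```

In the real system the "pose" is reconstructed from visual features of the shared physical space, but the host/resolve shape of the exchange is the same.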
 
 
CREATE MANY DIFFERENT PATHS TO ENGAGEMENT
In some ways, AR is a great way to create an all-access route for differently abled users to see things at their own scale. However, it comes with a whole set of new challenges. If you tell users to reach up and grab something or take two steps forward, what happens when your user can't reach the device or take steps?

Here, we've added an alternate way for users to reach faraway objects: a reticle that stretches and extends based on the angle of your phone. This is a very Googley concept, something we keep talking about: having many paths to success. There are people who like keyboard shortcuts versus people who like mouse interactions, and there are people who want to do things the slow way. We create different paths to success.

Roth, Matthew. 2020. Visual Storytelling in Immersive Reality. Narrascope 2020. U.S.: YouTube.





7/1/2020

AR DESIGN GUIDELINES

 

AUGMENTED REALITY DESIGN HEURISTICS: DESIGNING FOR DYNAMIC INTERACTIONS

Augmented Reality (AR) poses a number of challenges for designers. It's still new and has no established interaction best practices, so users and designers alike can find it confusing to work with. Unlike the clearly defined boundaries of desktop screen space, AR spaces are implemented within, and reliant upon, real physical environments, which makes them dynamic and variable. This complicates positioning, attention direction, collaborative interactions, and even research evaluation.

To help designers solve these challenges, the researchers developed 9 design heuristics (a term designers use for guidelines, or shortcuts):

1. Fit with user environment and task. 
AR experiences should use visualizations and metaphors that have meaning within the physical and task environment in which they are presented. The choice of visualizations & metaphors should match the mental models that the user will have based on their physical environment and task.

2. Form communicates function.
The form of a virtual element should rely on existing metaphors that the user will know in order to communicate affordances and capabilities.

3. Minimize distraction and overload. 
AR experiences can easily become visually overwhelming. Designs should work to minimize accidental distraction due to designs that are overly cluttered, busy, and/or movement filled.

4. Adaptation to user position and motion.
The system should adapt such that virtual elements are useful and usable from the variety of viewing angles, distances, and movements that will be taken by the user.

5. Alignment of physical and virtual worlds.
Placement of virtual elements should make sense in the physical environment. If virtual elements are aligned with physical objects, this alignment should be continuous over time and viewing perspectives.

6. Fit with user’s physical abilities. 
Interaction with AR experiences should not require the user to perform actions that are physically challenging, dangerous, or that require excess amounts of coordination. All physical motion required should be easy.

7. Fit with user’s perceptual abilities. 
AR experiences should not present information in ways that fall outside of an intended user's perceptual thresholds. Designers should consider size, color, motion, distance, and resolution when designing for AR.
8. Accessibility of off screen objects. 
Interfaces that require direct manipulation (for example, AR & touch screens) should make it easy for users to find or recall the items they need to manipulate when those items are outside the field of view.

9. Accounting for hardware capabilities. 
AR experiences should be designed to accommodate for the capabilities & limitations of the hardware platform.

These guidelines were developed through a rigorous selection and testing process that began by sourcing existing guidelines from an extensive literature review (see table below).


(Table: existing AR design guidelines sourced from the literature review)
These heuristics were then mapped thematically, and those groupings were evaluated in the first instance by three design experts. Following adjustments in response to this first round of feedback, the heuristic themes were re-evaluated, this time by five experts, to identify any duplication or lack of relevance. The heuristics (14 at this point) were then tested in practice during the design of two AR applications. The experiential insight gained through application revealed further inconsistencies and overlaps, which led the researchers to streamline them into the 9 heuristics presented above; these underwent a third round of evaluation by five expert reviewers to assess inter-item consistency and inter-rater reliability before being finalised.

Endsley, T.C., Sprehn, K.A., Brill, R.M., Ryan, K.J., Vincent, E.C. and Martin, J.M., 2017, September. Augmented reality design heuristics: Designing for dynamic interactions. In Proceedings of the Human Factors and Ergonomics Society Annual Meeting (Vol. 61, No. 1, pp. 2100-2104). Sage CA: Los Angeles, CA: SAGE Publications.


5/6/2020

Pokémon Go: PAST AND FUTURE TIPS

 

The State & Future of AR Games: Rose-Colored Glasses

In this 2019 GDC talk, Niantic CEO John Hanke takes a deep look at AR today and helps you imagine possible AR games and experiences that can deliver persistent shared experiences in the real world. 

His big tip?  AR GLASSES 

I am a big fan of a quote by Alan Kay: the best way to predict the future is to invent it, or to build it.
 
I know some people know us as the company that made Pokémon Go, or as a company that spun out of Google, which is cool. That's what we are. But we're our own thing too.
 
When we started, we wanted to take 3D technology, new digital satellite imaging technology, and broadband, and make a map of the world unlike any map that had been made before: a map of unprecedented detail, built on imagery for the entire planet (Google Earth). So we started with that. We were actually acquired by Google along the way, and with help from a bunch of other talented people inside Google we were ultimately able to realize that vision: to build a map and put the whole world on your desktop initially, and then later in the palm of your hand.
 
So after we built the map, people started doing all kinds of crazy things with maps on thousands of mashup sites. So we started thinking: well, what can you do with this geographic substrate of the world?

So we started Niantic wanting to explore what happens with maps, location, and wearable technology. By the time we spun out of Google, Pokémon Go was in development.
 
During development we identified 3 key design themes:
 
Exploration
The key thing that we settled on was the idea that in every neighborhood, there's probably a story that's interesting. There's a mystery that can be built there, everywhere in the world.
 
Exercise
Being computer people who spend a lot of time at our desks, we realized that everybody needs a little nudge sometimes to go outside and get their 10,000 steps or their daily workout in. If you get tired of sitting inside, we've created a nice space for you out there.
 
And we heard back from some of the early users of Ingress that people who didn't consider themselves athletes really appreciated that gamified nudge to exercise.
 
Social
This really came from feedback, not from our own insight, but from what people told us when they were using our products.  They love the opportunity to meet new people and have something to do together with friends, or family, so we adopted that.
 
EVALUATION
 
Exploration/Exercise:
We measure the total number of places that have been observed: special, unique places in the neighborhoods where people live, which players have photographed, named, described, and added to this global game board. It now numbers in the millions and is growing rapidly worldwide.
 
So this idea of stitching together a global game board, a place where we can play in all the interesting nooks and crannies of the world, is going pretty well.
 
Social:
In terms of social, we measure it by the formal connections that people form in the game. We added the friend feature to Pokémon Go last year: you can identify your friends, exchange gifts with them, and get bonuses for playing together in raids. There are now over 190 million connections.
 
And we also measure it in terms of the events that we hold. Has anybody been to a GO Fest event or a Pokémon Go Community Day? These events look a little like a music festival, something like Outside Lands or Lollapalooza, combined with a healthy dose of Comic-Con and a little bit of a 5K. Last year, we had 3 million people in total come to these events, and we've had single events with over 100,000 people. So the idea that games can be part of that festival/outdoor world is very much true.
 
By having these big festival style events, families learn about the history of the town in this really fun interactive way. We’re overlaying gameplay onto an existing kind of civic festival, dramatically increasing their attendance and drawing in an audience that wouldn't otherwise attend.
 
So we want to do more events like this. And that means opening up our platform, so that many people can build, hopefully, really fun experiences on top of it.
 
LEARNINGS
 
BROAD INCLUSION/APPEAL IS POSSIBLE


When I first started investigating the idea of us doing a game, people talked about casual games and mid-core games. Casual games had a certain look and feel and attracted a certain audience, while mid-core games were for "real gamers" and had a completely separate dynamic. We didn't want to put ourselves into a single category. We wanted to build something broadly appealing, but we also had ambitions around retention and wanted it to be financially successful, so this was a risky shot for us.
 
And what I'm reporting back to you is that it is possible. I'm going to call them accessible games: games that appeal to men, women, and people from all walks of life, across multiple demographics. Based on third-party research polls, we see that more than 40% of the players are women, and underrepresented minorities are in excess of 30%. So gamers are not a subset of the world; they look like everybody else.
 
MINDFUL/MEANINGFUL DESIGN

A. Short AR sessions

We know that people really don't like to hold up their phones for a long period of time. We think the right session length for holding up your phone is not more than two to three minutes, for example. So if you think about AR you want to think about it as a type of play that happens within a broader game, where a lot of the gameplay is not happening in AR mode.
 
B. Social sensitivity

You also need to be aware that there's a social stigma: when you hold up your phone and wave it around, it looks like you might be taking a picture of everybody standing around you. This is a big drawback to AR in certain situations, and it's something that really needs to be designed around.

C. Efficiency

By that I mean two things:
  1. Device efficiency, because there's a power drain that one has to take into account
  2. Game loop efficiency. If people are power levelling through a game, they're not happy to take that extra time to instantiate an AR session and play in that way.
 
So short sessions and really thoughtful design are important. We think the right approach is to design things for AR that can't be done outside of AR, ensuring that it's obvious why you're there: you're having an experience that you can really only realize through AR.
 
The other thing that I want to flag here is that you hear a lot of hype in the industry about augmented reality, but there are inherent limitations to AR on phones. I, for one, don't believe that AR is going to take over the phone and everybody's just going to walk around in AR mode all the time. But I do believe there's huge potential, and I believe that potential is going to come in future devices like augmented reality glasses.
 
So what's the killer feature for a real-world AR game?
People ask us this a lot, and I don't think it's the AR. I think AR is a nice embellishment on top; it's the icing on the cake.
 
The things that we think are really the killer features are our core principles. That's why we adopted them: exploration and exercise and real-world social interactions.
 
MAPS FOR MACHINES

We made Google Earth and Google Maps for all of you, for people. They're designed to be fun, friendly, accessible UIs for human beings, so that we can never be lost again and can navigate wherever we want in the world. The AR maps that we're building are built for machines, so they're a very different kind of map in some ways. They're in the service of helping us be better human beings, via those computers: our cell phones today, maybe glasses in the future. And beyond that, maybe robots, because robots that want to move around in the world need that very same kind of precise localisation. What the map does, by accumulating data, is allow the computer to know exactly where it is in the world, down to the order of centimetres in latitude and longitude.
 
CO-OPERATIVE MAPS

What does detailed AR mapping look like? You start with lots of images, which might come into the system in many different ways. From those, you derive a very, very detailed three-dimensional understanding of space. (Shares recent research and development efforts.)
 
At this scale, the world is nuts. Stuff changes all the time; it's incredibly dynamic. So this is probably a map that never gets finished. Some things actually change second by second, so the idea of mapping them in advance is a non-starter. People walking down the street, cars driving down the street: you can't create a map of that. That's something you have to understand in real time if you want to augment the world in a way that's realistic.
 
So in our view, this kind of thing can really only be built in a cooperative way, by the people who are using the system: a collaborative, ongoing effort to constantly map and remap the world, an evolution from photos to points to putting holograms into the world.
 
AR GLASSES = FUTURE PLATFORM SHIFT
 
But ultimately, the interesting question is: is this something that is Meaningful with a capital M? Is it going to affect our lives, and many people's lives, in a significant way?
 
What is happening on handsets today is a warm-up; that's the pregame. What's going to happen with glasses in the future is the real deal. I think it's one of those platform shifts you see once every 10 or 20 years. We saw PCs, we saw the cloud, we saw mobile; I think AR is one of those shifts, so it's worth our time to understand, and worth time and money to invest in early.

Hanke, John. 2019. The State & Future of AR Games: Rose-Colored Glasses. In GDC Vault. U.S.: YouTube.

 


4/27/2020

DESIGNING FOR THE BODY IN AR & VR

 

Understanding The Body's Role in VR & AR Game Design

In this 2016 VRDC session, Funomena lead designer Robin Hunicke explores some of the core challenges and opportunities that the body presents in VR & AR development, from controller design and gaze inputs to body posture, movement and gesture design.
Today I am talking about embodiment as a design surface for designers in VR and AR titles.
 
PRESENCE
 
The eyes, the ears, and now the hands and the senses on our hands can be engaged in experiencing what we like to call presence: the psychological state or subjective perception in which, even though part or all of an individual's current experience is generated by and/or filtered through human-made technology, the individual's perception fails to accurately acknowledge the role of technology in the experience. It's a strange way to define it, but I think it's interesting to think about it this way: your body is being fooled into believing that what it is experiencing is real. We're really tricking your body, and this creates a space where designers can and will explore how presence affects one's experience of playful content.
 
HANDS
When you're wearing a head-mounted display, the hands are very important, and there's a lot of work being done right now to figure out what those hands will be. Why are we obsessed with hands? Anyone who's tried to work in VR knows that when you're in a space where you're experiencing reality and you can't see your hands, you feel weird. It's related to the rubber hand illusion: there's a physical object, and there's no response to it. Trespasser, the video game from many, many years ago, was famous for this sort of disembodied-hand problem. Displaying the controller directly, having an abstract hand model, or showing the primary game object or tool, whether that's a hammer, a gun, a brush, whatever it is, or some combination of these, seems to be super important for engaging the body.
 
Games that do this well are about touching stuff. They're about being in the space and just poking at everything. Jenova Chen, who led the design on Journey, used to say that when you move into an interactive virtual space for the first time, whether it's a VR experience or a classic video game, you're like a baby, pushing every surface to see what happens. These games really engage you that way. So while that's not news, what's interesting to me is the difference between the power grip and the precision grip. Think about the controllers: put your hand out in front of you and make the gesture you make when you grab a VR controller. It's a power grip.
 
THE POWER GRIP
It's a force grip. Put your hand out in front of yourself and grab space, really do it, like you're a Wing Chun practitioner about to do the one-inch punch. When you do this motion, you engage your fist, your lower arm, your shoulder, your bicep, your back muscles, and your torso. The power grip immediately stresses your body; it raises your blood pressure and your heart rate. If you do it with force, if you really try, you engage all the muscles in your arm, and when you do a one-inch punch in Wing Chun you can really crack someone's skull. The power grip is a very powerful tool, and we use it all the time: when you swing an axe, when you punch somebody, when you do a really vicious dance move. Now, the other grip is the precision grip.
 
THE PRECISION GRIP
Pretend we're conducting the score for Journey. It's a very different feeling. What do you engage when you use the precision grip? Just your hand, just a little bit, maybe the back of your arm. Your body is telling you: this is action, this is thinking. This is why artists love to have a tablet with a pen instead of a mouse; it's the relaxed pose of painting and drawing. It's a physical thing, and the tool itself engages the mind in a way that's really insightful and has worked for thousands of years.
 
So when you're using a power grip to do precision activities, how do you translate that body feedback? This is a question that all of us get to work on now. What is the feeling of painting when your fist is closed? It's not even finger painting; it's like punchy painting. It doesn't really feel right, and when you open your hand it feels much better.
 
GRIP CONTROLLERS
This desire to create a sense of precision was at the core of Luna. When we started working on it, it wasn't for VR but for the Intel gesture-sensing camera, and I really wanted to build a game where you could just touch the world with your fingers. In the very first prototype that Scott Anderson ever made, pressing the buttons controlled a little pair of chopsticks, and you could use those two pieces to pick up little things. So immediately, with the capture buttons and the pressure, you started thinking delicately; it translated the power grip into something precise. We actually modelled and physically sculpted what we call the grasper for the game, which is like a dinosaur claw, or a bird claw. It's also like a lotus.
 
The other thing we decided to do was to de-emphasise the butt of the controller-grasper and focus on the tips, giving it a brush-like quality. It's still not finished, it's a work in progress, but when it grasps it becomes more of a pod, more subtle: solid in the tips, and then it opens again. We really wanted a hand presence that would make you want to touch the world in a different way. A hand model is helpful, but it may not translate the action the way that you want it to.
 
EMBODIMENT
Best-practice literature advises you to put a body in there. That's great, except that it can be pretty disturbing if the body isn't your body.
 
A very interesting experiment on body image was done with people with eating disorders, who were given an underweight or an overweight virtual body: looking down and seeing the body you actually have versus the body you think you have, and then discussing it with therapists (there's a link to the paper as well).
What we're finding is that a lack of realistic body identification automatically conveys difference to us. If you put in a realistic body and I'm moving around in it, but it's doing unrealistic motions, or it's not totally smooth, with the elbow joints and arms not responding the way I want them to, then I will find it really difficult.
 
We've been seeing some interesting work on gesture, reaction, and celebration in embodied avatars in social spaces. In this context, say I'm Lucy Bradshaw: I'm not going to see myself this way, but others will see me that way. I'm going to see my hands, but they're going to see the avatar. So we're trying to mix that space. And when you look at spectator-view mode and the way we're embracing it, we're trying to get as much as possible out of the motion of the body without actually having to replicate the physics of it.
 
So the body is back, sort of, when it comes to actual avatars. VR is a little like puppeteering right now. When you think about gestures and the body, if I can't see the whole body and I'm just seeing my hands while trying to do ninja moves, what are you seeing? Are we simulating the rest of that body? How do we perceive it? Specifically, if you wave your arms like a ballerina, you're just going to see the tracers on your hands. Try, just for a minute, to make a balletic movement with your arm.
(Shows examples of different movement options in games)
 
BODY = EMOTION
You can also use the body to create emotion: reaching up and striving, sitting back and relaxing, rhythm, and so on.
I like face tracking. It's better at knowing what you want than you are. If you ever go to dinner and watch two people on a date, you can see the micro-expressions they're not catching because they're so busy trying to mask their own feelings.
 
EMBODIMENT FRAMES
When you do a close-up in a film, you feel emotional. When you bring things into perspective with a person at eye level, suddenly your heart opens up. That's basic film technique, but when you bring it into VR, now you're really first person!
Wayward Sky does a great job of this, playing between the two camera frames, even though technically it's not a frame. It's an embodiment frame.
So, it's not about the camera frame anymore. It's about the body frame.
 
(Shows footage of Luna playtests)
 
EDUCATION
At the same time that we're thinking about how presence influences gameplay, we're also seeing how presence may in fact influence education; there's a lot of material if you just Google VR and learning. We're doing a much better job of selling this than of actually implementing any of it yet. It's very interesting to think about moving away from the blackboard and actually working on a task in a virtual space. The words we use to describe this are body learning, or self-determination in classrooms. And there's huge interest in this, especially in underserved communities where education really sucks.
 
Engaging in a task or teaching others how to do something is the best learning experience. And sitting listening to someone talk, which I'm doing to you right now is the worst.

Hunicke, Robin. 2016. Understanding The Body's Role in VR & AR Game Design. In GDC Vault, edited by GDC. U.S.: YouTube.

 
 


4/20/2020

BUILDING SOCIAL WORLDS: LESSONS FROM MMOs

 

Still Logged In: What AR and VR Can Learn from MMOs

In this 2017 GDC session, Ultima Online designer Raph Koster talks about the social and ethical implications of turning the real world into a virtual world, and how the lessons of massively multiplayer virtual worlds are more relevant than ever.
Social spaces aren't just games. When I was making Ultima Online, I designed places and worlds and societies that just happen to have a lot of games in them. Really, most of the work was going into simulating things like tailoring and rabbit hunting and running a shop. So, if you are making a social VR or AR space, you aren't just making a game, you're designing a society.
 
MAKE ENHANCED INTERACTIONS ON PURPOSE, NOT BY ACCIDENT
Do you recall the case of the virtual groping in quiVR? 
A woman logged into this game about doing VR archery, and a stranger, named BigBro I think, walked up and started virtually groping her. The company didn't know about the incident until an article was written about it. To their credit, they immediately put in a bunch of features so that gropers couldn't reach people: bodies could no longer see one another when they came into close proximity, players could expand what they called a personal bubble, and they added a new gesture that creates an exploding blue force field, turning the harasser invisible and preventing them from interacting with you. The company was very upfront, immediately posted a post-mortem about what had happened, and encouraged everybody to take this stuff seriously.
 
Ideally there would be other players on hand, people who are logged on 24/7 monitoring behaviour, who could make a harassment call, turn Big Bro into a toad and ban him for life. This is actually best practice in online community spaces.
 
Because if you host an online community where people can cause harm to another person you are on the hook.
You're the government of that space, you're hosting real people.
 
CREATE A DUE PROCESS
You need to institute a rule of law. That means having a code of conduct, an end-user license agreement, and Terms of Service written in plain English, which every player affirmatively signs up to the first time they log in; they swear an oath: I will abide by this. It needs to be posted everywhere, and you need to reinforce it as a cultural thing every day.
 
You don't get to abdicate doing it. It has enormous benefits, not only as a cultural thing, but also for clearing up edge cases, because you're going to be fighting an awful lot of edge cases.
 
DOCUMENT WORLD ACTIONS
The most concrete, practical tool is a circular buffer: a recording that keeps track of the last one to five minutes of every single interaction each client sees. It stores events, discards anything more than five minutes old, and always appends the most recent activity, so that when something like this happens, players can hit the Report button and there is impartial evidence on hand to be adjudicated.
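A minimal sketch of such a circular buffer, assuming a timestamped (actor, action) event shape; a real system would record full gesture and position data per client, and the five-minute window is the figure from the talk:

```python
import time
from collections import deque

class CircularEvidenceBuffer:
    """Rolling evidence log: keep only the last `window` seconds of
    interaction events a client sees, so a Report button can snapshot
    impartial evidence for adjudication."""

    def __init__(self, window=300.0, clock=time.monotonic):
        self.window = window
        self.clock = clock          # injectable clock makes this testable
        self.events = deque()

    def record(self, actor, action):
        now = self.clock()
        self.events.append((now, actor, action))
        # discard everything older than the window
        while self.events and now - self.events[0][0] > self.window:
            self.events.popleft()

    def report(self):
        """Snapshot for adjudication, oldest event first."""
        return list(self.events)
```

In practice, the buffer lives on each client and is only uploaded when someone files a report, which keeps the cost manageable even though every gesture is being logged.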
 
Believe me, you will have cases that turn into he-said-she-said if you don't have tools like this.
 
You need incontrovertible evidence, and voice logs aren't enough. You need to record every single gesture, so that when somebody does something like this, the customer service person can replay it and verify who started the fight.
It's not going to be cheap.
But you need to do it.
 
CREATE PERSISTENT ACCOUNT SYSTEMS
A key lesson from all kinds of social spaces and virtual worlds is that offenders repeat. The number-one killer on Ultima Online was personally responsible for murdering over 4,000 other players. And if your social VR or AR system does not have a persistent account system, with persistent identities that players invest in, you're effectively letting every offender get away scot-free, because all they need to do is create a brand-new account every time they log in.
 
Actions like this matter more in VR, precisely because VR embodies us and makes us feel like we're really there.
 
If you haven't read the original "A Rape in Cyberspace" article, that's homework item number one. It came out in the Village Voice in 1993, covering the first major documented virtual rape incident.
The perpetrator didn't even do it to the victims' actual avatars. He used a technological tool available in that world called a puppet, which simply created text that made it look like somebody else was doing something. The victims were actually just standing there, and they could mute and block him all they wanted, but it didn't matter, because it looked to everyone else in the room like Mr. Bungle was making these poor people do horrendous things to themselves. In 1993 this became a bit of a cause célèbre, because people started arguing about whether or not it was really rape. I mean, it's online; that can't be real, can it? We know better now, I hope, although there are plenty of Twitter trolls who probably still think, what's the big deal? And I'm pretty sure that right now somebody is thinking it would be awfully cool to do user-generated content in social VR, and somebody is probably inventing the VR version of that puppet right now.
 
And of course there's the most famous incident; I encourage you to go look up the video on YouTube. It's astonishing in a surreal kind of way. This poor woman, Anshe Chung, was literally attacked by a flying swarm of thousands of penises that swirled around her in a tornado. If you run that world, that's on you as much as on the troll.
 
Every feature you make should be looked at first as a weapon.
How will a player abuse someone else with this?
Odds are you aren't actually evil enough when you play test.
 
How many of you played The Division? It's an MMO shooter on consoles, built by a triple-A team, in which all the players collide with all the other players, and single quest-givers sit in the central lobby. So there were lines running out the door, and people would barricade the door to prevent other people from getting quests. My personal experience with making that mistake was in the alpha for Ultima Online: players would let others out (who would then often try to block access for others), then rush to the windows and shoot them in the back as they walked away.
 
LIMIT WHAT EMPLOYEES AND ADMINS CAN DO WITH THEIR IN-WORLD POWER
And the people you hire, and the people they hire, will in fact do things like consider it a perk to visit people's private chat spaces and watch them engage in very private things. You have to hard-code in limitations: log every kind of extra-normal command, be able to grant or revoke each capability granularly per individual, and only grant the powers specific to the scope of that individual's job.
 
In the early days of MMOs we had issues where game masters were having hot-tub parties with players and trading sexual favours for endgame items. They needed the ability to spawn those items for their job; unfortunately, we didn't have logging in the hot tubs, so it was hard to prove these situations were happening. Over time we have drifted away from having in-game admins, in order to minimize the possibility of personal relationships forming between customer service and individual players, because corruption happens.
 
DEVELOP LEGAL INFRASTRUCTURES
MMOs started as the wild west of gaming and ended up being the single safest genre in gaming for women to play, because everything else is unmoderated voice chat.
Whereas MMOs have persistent identity, persistent community, and logging: all the stuff we've talked about. That's something we fought for, and Blizzard led the way and made it real. You can actually see the gender split of the WoW audience go from 20-80 to 50-50 over the course of a decade. I think it's important for us to realize that took a lot of work.
 
Also, be aware that if your virtual world is successful, your digital objects are going to have real-money value. There's no escaping it, so simply assume that anything you create as a digital object has real-world value, and that people will be sleeping for it, paying for it, and stealing it. Go in the door assuming that this is part of the field.
 
Fundamentally, you should not be opening a virtual world unless you think in terms of this kind of legal infrastructure. You're the government of this space.
 
TAKE RESPONSIBILITY
Furthermore, the people who use virtual spaces are very often using them for therapeutic purposes. That means a higher incidence of individuals inside your space who are dealing with psychological stress of various sorts.  There is not a single, large scale commercial virtual world that does not have the FBI and suicide hotlines on speed dial.  The number of runaways that we tracked down for the FBI and the police, while at Sony Online, was easily in the triple digits.
 
Be prepared to deal with manifestations of mental illness. Those folks will frequently be calling for assistance within this virtual setting. And you have built their home away from home, the place that they use to cope, the place they rely on as their support system. Often the friends they've made in your space are the ones that literally keep them alive. You are holding their lives in your hands, so you have to take it seriously.
 
LET PEOPLE PLAY WITH IDENTITY
One of the huge mistakes of social VR is that they're forgetting the joys of being someone you aren't. That has always been a fundamental premise of engaging in a virtual world. Even in the normative modern MMO worlds, cross gender roleplay is still incredibly common. 
 
One of the beauties of the early internet, and virtual worlds in particular, was that nobody could tell you were a dog; only now we can hear you barking on Discord.
 
This is why I hate voice chat. Because it's one of the many things that is taking these tools away from people.
 
The original world that Richard Bartle designed was intended as a blow against the British class system. That is why you can level up and be anyone and anything: he felt he'd grown up in a very restrictive world where, oh no, you have the wrong accent, you'll never get a doctorate. And in fact, a lot of the earliest virtual spaces were safe spaces, for queer people, and for the asexual who prefer the idea of having hundreds of little tentacles for genitalia.
 
It extends beyond voice chat, though. How many of you are short, like me? All right, short people, you know that you have lower lifetime earnings than tall people, right? You get fewer promotions and lower salaries. Well, it turns out that halflings and gnomes in fantasy role-playing games earn less XP per hour and level more slowly, because we import real-world biases into these worlds. And I have lost count of the number of social spaces that don't have race as a customization option; it's embarrassing.
 
BUILD EMPATHY WITH IDENTITY INVESTMENT, TEAMWORK AND EMOTE SYSTEMS
As people enter a space, if they don't have a stake in it, if they don't have persistent identity, it's very easy to behave as a virtual sociopath; they have no community ties whatsoever. So you have to start creating investment in an identity really early on. Those of you who are making drop-in systems: don't.
 
You also can't use things like credit cards to block people; the average U.S. household has something like 13 credit card numbers. The real way to do it is by fostering personal investment in the identity they're creating, so that they don't want to lose the investment they've made.
 
There are two classic ways of doing this:
1) Gameplay that involves teamwork.
2) An emote system. Best-practice emote systems automatically parse text chat and pull out the emotional cues that people are already sticking in there, like smileys. If you instead give people manual, complex puppeteering controls, it's too hard. Yes, there's a subset of people who are good at puppeteering; most people are not.
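The best-practice approach in 2) can be sketched as a tiny parser that lifts emoticons out of chat text and turns them into avatar emotes. The emoticon set and emote names below are illustrative assumptions, not from the talk.

```python
import re

# Map common emoticons already present in chat text to avatar emotes.
# This particular mapping is invented for illustration.
EMOTICON_EMOTES = {
    ":)": "smile", ":(": "frown", ":D": "laugh",
    ";)": "wink", ":o": "gasp",
}

# One alternation matching any known emoticon, each escaped literally.
_PATTERN = re.compile("|".join(re.escape(e) for e in EMOTICON_EMOTES))

def parse_chat(line):
    """Return (clean_text, emotes): the chat line with emoticons removed,
    plus the avatar emotes to trigger, in the order they appeared."""
    emotes = [EMOTICON_EMOTES[m] for m in _PATTERN.findall(line)]
    clean = _PATTERN.sub("", line).strip()
    return clean, emotes
```

The player types exactly what they would have typed anyway; the avatar simply picks up the cues for free, with no puppeteering interface to learn.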
 
LIMIT SOCIAL OPTIONS TO SUPPORT POSITIVE INTERACTION
 
While voice does convey a significant amount of emotional content, you can't expect people to be trained actors who know how to convey emotion. If what they need is to convey real emotion, they will prefer Skype, where they get all the cues. So my actual advice to social VR people, bluntly, is to sidestep as many of these problems as possible: don't have large-scale worlds with a bunch of strangers interacting with one another in an unsupervised fashion.
 
Stick to small groups, people who mostly already know each other.  Also, make the avatars relatively unrealistic so nobody cares about the fact that the emotional cues are wrong.
 
If I were making a social VR thing that I wanted to be a commercial success, I would go clone a game where you sit at opposite sides of a table.  You can wave your hands and your head, all you want. You can build crazy avatars, because it's all about personal expression, getting to know the other person. It's limited to two people at a time. So, if you bail or block, you'll never have to deal with that person again. It just sidesteps all the issues, right?
 
Play to the strengths of VR today. It's clumsy, but you can play with that. I think that is part of why something like Job Simulator is one of the most successful VR experiences we have: it embraces the limitation.
 
All of these issues get dramatically worse once we're talking about social AR.
Rendering is by far the least important part of this entire field. The representation is not the part that really matters. What matters is the action on the server, which is where you actually track things like avatar profiles, their histories, the location of objects, and how they interact with one another.
 
Text games actually embody you almost as powerfully as VR does, which is why text MUDs are also great role models. For example, consent systems for paired emotes were a very common design trope in text MUDs: you can't just hug or grope somebody; there is almost a two-factor authentication process the players must go through in order to engage with each other.
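The paired-emote consent trope amounts to a two-step request/accept exchange: the emote does not play for either party until the target confirms. A minimal sketch; class and method names are invented for illustration.

```python
class ConsentEmotes:
    """Two-step consent for paired emotes, in the style of text MUDs:
    an initiator requests, the target must accept before it plays."""

    def __init__(self):
        self._pending = {}  # (initiator, target, emote) -> True

    def request(self, initiator, target, emote):
        """Record the request and return the prompt shown to the target."""
        self._pending[(initiator, target, emote)] = True
        return f"{initiator} wants to {emote} {target}. Accept?"

    def accept(self, initiator, target, emote):
        """Play the emote only if a matching request is pending."""
        if self._pending.pop((initiator, target, emote), None):
            return f"{initiator} {emote}s {target}."
        return None  # nothing pending: the emote never plays

    def decline(self, initiator, target, emote):
        """Silently discard the pending request."""
        self._pending.pop((initiator, target, emote), None)
```

Note that an unsolicited `accept` does nothing: the interaction simply cannot happen without both parties acting.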
 
That stuff appeared in the text worlds first because, in a text world, there was no notion of distance: everybody was standing on a pin in each individual text room. So all it took was one word for someone to be interpenetrating with your avatar.
 
All of those things will happen in the real world. It is not going to be very long until there is a template ID associated with your washing machine, which will have its own character sheet with its repair history. That's not far-fetched. So if you design your social structures in a way that incentivizes players to commit virtual violence against one another, they're probably going to start doing it for real.
 
MISTAKES ARE TOO EASY: DEVELOP A PLAYER’S DECLARATION OF RIGHTS
All of these things can be quite unintentional. Pokémon GO absolutely did not intend for its map of spawns to be racist and classist. And yet it's really hard to find Pokémon in poor Black neighbourhoods and in rural areas, because its UGC map happened to be built out of kids going to Stanford and Berkeley. So we do have to think about these issues.
 
It might be that someday there will be legislation that protects virtual-layer annotation rights. That might actually be the only solution to something like Pokémon spawning in your backyard.
 
As yet there are few clear rules. What if somebody starts spawning Pokémon on top of a Black Lives Matter rally? It is trivially easy for the operator of a virtual world to pull something like that, and we've seen the early harbingers of it: Anonymous got its start by holding a flash mob in front of a Scientology facility in New York City. We have, in a sense, trained people to do this.
 
If there were only one Pikachu to be had at a Pokémon GO flash mob, I think we would have seen Pika murders by now. It's human nature.
 
And there are other risks. Just yesterday it was discovered that a cloud-connected toy company was storing the voice chat logs of every kid who ever talked to their plushie on an AWS instance, in a database that was not password protected.
 
To my knowledge, exactly three virtual worlds in history have bothered to adopt a Declaration of the Rights of players.
 
Start thinking in terms of the rights that players should have in these spaces, because soon enough the Internet of Things is going to hit. This is not fiction either.
 
HAVE FUN
Let's have some fun. Seriously, let's explore, have some fun.
Just get your homework done first. Thank you.

Koster, Raph. 2017. "Still Logged In: What AR and VR Can Learn from MMOs." GDC talk. In GDC Vault. U.S.: YouTube.
 

​


4/15/2020

ACCESSIBLE AR AND VR MEDIA

 

MAKING AR AND VR TRULY ACCESSIBLE

In this 2016 VRDC session, Minds + Assembly's Tracey John, Radial Games' Andy Moore, Tomorrow Today Labs' Brian Van Buren, and independent designer Kayla Kinnunen explore the challenges and necessities of making sure virtual reality experiences remain accessible to players of all backgrounds.

18–20% of the population have some kind of disability (visual, hearing, cognitive, motor and mobility impairments), and this figure increases rapidly with age.
 
There are some people for whom VR just isn't an option, e.g. anybody who wears glasses may struggle with VR. But many more can gain access, so long as the software is designed in an inclusive way. Accessibility forces you as a developer to think outside of your own box, to be more than the bad designer who only designs for themselves. e.g. If you build a room-scale experience, that game should also be playable by someone who is seated and can only play with a single hand.
 
If you design an experience for an able-bodied adult within the normal height variance, you are writing off a large number of people; you're even blocking off entire age groups, e.g. small children don't have the same reach or cognitive capacities.
 
DESIGN QUESTIONS
 
- Think about how the human body interacts with and experiences the space of the game.
- Ask: what if I have to do this with one hand?
- Ask: what if someone who is seated needs to use a standing-only mobile experience?
- Ask: should I be putting objects on the floor for people who have difficulty bending over?
- Ask: why force somebody to do a Konami code when just pressing the trigger button will do?
 
DESIGN TIPS:

There's a lot of know-how in this area now: go to gameaccessibilityguidelines.com, go to Microsoft's webpage on how to create an accessible video game, and take on board suggestions like MAPPABLE CONTROLS for people who can't easily press buttons.
 
1. CREATE ALTERNATIVES

Try to be comprehensive in what you're designing, so when you're designing a space in VR, use the entire toolkit you have available. Instead of creating a cutscene, think about how to make people look at things, how to use colour palettes, or how to design a comprehensive cue, something that uses audio AND lighting.

e.g. When I was designing an earlier iteration of our game, I used audio cues to get people to turn their heads, because positional audio is such a powerful tool in virtual reality. But people with hearing impairments wouldn't respond, so that forced me to redesign our trigger and cue systems to make sure there was a visual aid as well, like targeted lights and flashing arrows.
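The redesigned cue system described above, where every cue fires on every registered sensory channel so that no single sense is required, can be sketched as a small dispatcher. This is an illustrative sketch only; the channel names and handlers are invented.

```python
class CueSystem:
    """Redundant attention cues: each cue fires on every registered
    channel (e.g. audio, light, arrow), so players who cannot perceive
    one channel still get the cue through another."""

    def __init__(self):
        self._channels = {}  # channel name -> handler callable

    def register(self, name, handler):
        """Add a sensory channel, e.g. spatial audio or a flashing arrow."""
        self._channels[name] = handler

    def fire(self, cue):
        """Dispatch one cue to all channels; returns what each produced."""
        return {name: handler(cue) for name, handler in self._channels.items()}
```

Because channels are registered independently, adding a haptic or visual fallback later does not require touching any of the gameplay code that fires cues.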

When you design with accessibility in mind, you come up with solutions that kill three or four birds with one stone. So, if you make an experience playable at a desk instead of at room scale, you've solved the problem of people having to bend over, because now everything's at desk height; if you make the desk height variable, that supports small people like children; and even the challenge of limited mobility is solved. All these problems are sometimes solved with one very simple solution. You can open up whole new markets: not just people with disabilities, but people with small living rooms also benefit, and all of a sudden you're also solving for platforms that have more limited tracking areas.
 
2. EXTEND HUMAN REACH

So, imagine a default action where you walk over to something, pick it up, walk over to where you want to put it, and place it down. Instead, one group created a ray cast from the controller to act as a highlight, so that whenever the ray collided with another object in space, pulling the grab button would suck that virtual object right into your hand. To extend that, they also allowed a half-press to hold an object in place at a distance, so people could move things around: hold the half-press and move your arm backwards to extend the object out. This gave the user full control of object placement within a three-dimensional, room-scale space without ever moving from their chair.
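The ray-cast highlight described above boils down to a ray-sphere intersection test: find the nearest object the controller ray passes close enough to, then snap it to the hand on a grab. A minimal, engine-agnostic sketch, treating objects as spheres; the function and field names are invented for illustration.

```python
def _dot(a, b):
    """Dot product of two 3-vectors given as sequences."""
    return sum(x * y for x, y in zip(a, b))

def raycast_grab(origin, direction, objects, max_dist=10.0):
    """Return the name of the nearest object hit by the controller ray.

    `direction` must be a unit vector; each object is a tuple of
    (centre position, radius, name). Returns None on a miss.
    """
    best, best_t = None, max_dist
    for pos, radius, name in objects:
        v = [p - o for p, o in zip(pos, origin)]
        t = _dot(v, direction)        # distance along the ray to the
        if t < 0 or t > max_dist:     # point closest to the centre
            continue
        miss_sq = _dot(v, v) - t * t  # squared ray-to-centre distance
        if miss_sq <= radius * radius and t < best_t:
            best, best_t = name, t
    return best
```

In a real engine you would use its physics ray cast, but the selection logic, nearest hit within reach wins, is the same.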
 
3. MAKE PLAYERS MORE MOBILE
 
The easiest thing you can do for mobility is to have teleporting in your game. It gives somebody a very intuitive way to navigate your space, especially if you also add rotation on the teleport, so you can highlight the space you want to go to and then immediately pick the direction you want to face. This allows somebody who uses a wheelchair, and is only able to face in one direction, to put themselves at any position on the map.
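Teleport-with-rotation is one atomic operation: set position and facing together, so no physical turn is ever required. A trivial sketch, assuming a hypothetical player record with a position and a yaw angle in degrees:

```python
def teleport(player, target_pos, face_degrees):
    """Move the player to the highlighted spot and snap to the chosen
    facing in a single step; yaw is normalised into [0, 360)."""
    player["pos"] = target_pos
    player["yaw"] = face_degrees % 360.0
    return player
```

The key design point is that facing is chosen at the destination preview, not by turning after arrival.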
 
Alternative mobility design strategies:

 
To keep the pleasure of movement and gesture, we added a grabby claw: you can take a stick and telescope it out using your physical motion, and hit the buttons on the controller to grab things remotely and still move them. So you still get a sense of connectedness and presence.
 
And instead of going for teleportation, we opted for scaling. You can scale the whole environment so that your natural arm reach can now reach across your room instead of having a teleport button.
 
Not every solution works in every application. But if you think about these kinds of things early in development, e.g. knowing that scale is an option, it really opens a lot of doors.
 
4. TEST WITH DIFFERENT SHAPED BODIES
 
If you're trying to make a game accessible later in the development process, you have to do a ton of user testing and watch for the pain points, which could be as subtle as a sigh of frustration when someone keeps having to bend over and pick something up. These sorts of barriers are likely to first show up as annoyances to people who are able-bodied.
 
e.g. One of the near-field interactions we asked players to do was to play a game of beer pong. A dispenser would dispense ping-pong balls, players would throw them, and there were visual rewards if you knocked all the balls out of the container. For Wesley, our artist, who is six foot seven with a bad hip and a bad knee, bending over repeatedly to pick the balls up was very painful. So we put in a detector: if the nearby space is ever empty of balls, the program pours out a bunch more and refills the container. Players don't even think of it as an accessibility feature.

People have different head heights – does this affect the play?
 
If users are at different head heights, there are multiple ways to get around that: scaling is one of them; being able to adjust the head height of the user is another; as is designing the space so that it is customizable, e.g. making the height of the control panel adjustable.
 
e.g. In our game Fantastic Contraption your toolbox is a cat. If you double-click, the cat comes flying and you can position it anywhere in the world just by clicking your fingers and calling the cat. Something a lot of people discover fairly quickly is that if you pick up the cat and place it anywhere, at any height, not even on the ground, it'll just hover in that position, so you can customize the play space and put your tools wherever you want them. We have made prototypes of other games where every metallic control panel has bolts, and if you undo the bolts you can pick up the whole control panel and move it. So it's not difficult to build in these kinds of features.
 
Install your game on someone's computer and watch them play it at home over time.
 
We discovered that many Vive players don't use headphones at all, so you can't rely on audio cues.
We also found that if you don't support seated play, then good luck having a successful title on, say, PSVR.
If you don't support forward-facing gameplay, then good luck launching on Oculus.
… if you solve for disabilities, you also grow your market incredibly.

Reach out to your local disability meet-up groups and invite people to try your game out.
And remember to play-test with people who have different sorts of disabilities, because motor impairment is very different from mobility impairment, which is very different from hearing impairment, which is very different from vision impairment, which is very different from cognitive impairment. Each has its own specific needs.
 
5. MAKE YOUR OWN CONTROLS
​

As a wheelchair user, I turn in a tank fashion: I turn one wheel one way and the other wheel the other way to do a tight turn. With the Vive controllers, if I have to hold down a button to keep holding an object, that becomes very difficult, so I'll just put the controllers in my lap and move to the area I'm heading to, turn, or do whatever. It's the same problem with the Oculus: the sensor ring on the Oculus Touch gets in the way when I'm using the butt of my palm to turn, and for something that requires quick turns and quick movement changes, that really screws things up.

Valve has made the hardware of the sensors available, so you can partner with them and modify the sensors to create your own controllers, and that's going to make it very easy for people to come up with personalised controls. There's no reason you can't go to your local maker-space with a 3D model of a controller and build it. That's going to make it a lot easier if you need to design controllers with straps on them, to strap the controller into someone's hand because they don't have the grip strength to keep a controller in their hand, or if someone needs larger buttons because they have less motor control. For these things, the hardware solutions are there; it's just a matter of building them.
 
It becomes important to understand the base actions you're trying to accomplish, and then to allow end users to remap those actions to different inputs.
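That remapping idea can be sketched as a binding table that decouples base actions from physical inputs; the input and action names below are invented examples, not any engine's API.

```python
class ActionMap:
    """Base actions decoupled from physical inputs, so end users can
    rebind 'grab', 'teleport', etc. to whatever input works for them."""

    def __init__(self, defaults):
        self._bindings = dict(defaults)  # input name -> action name

    def rebind(self, input_name, action):
        """Bind `action` to a new input, releasing any old binding
        so exactly one input drives each action."""
        self._bindings = {i: a for i, a in self._bindings.items()
                          if a != action}
        self._bindings[input_name] = action

    def action_for(self, input_name):
        """Look up which base action an incoming input should trigger."""
        return self._bindings.get(input_name)
```

Gameplay code only ever asks for actions, never for buttons, so a player who cannot squeeze a trigger can drive the same action with gaze dwell or any other input.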
 
And the great thing with VR is that it allows a lot more inputs to just happen naturally. You can use gaze, you can use head position, and gaze will get even higher fidelity when we have eyeball tracking. It's a puzzle. I've seen people experimenting with interesting gameplay designs where the hand controllers are intended for a second or third player and the main character only uses the headset. We can get incredibly creative here.

GDC. 2016. "Making AR and VR Truly Accessible." VRDC session. In GDC Vault. U.S.: YouTube.



3/19/2020

Writing for AR and VR

 

Escaping the Holodeck: Storytelling and Convergence in VR and AR

At the 2016 Game Developers Conference, Rob Morgan, who co-wrote J.K. Rowling's Platinum-selling augmented reality PS3 title 'Wonderbook: Book of Spells' and its sequel 'Book of Potions', shared what he has learnt about the particularities of writing for real-world and embodied augmentation.

In a holodeck the only real thing is you and everything only has to feel real in relation to you.
 
This gets harder the closer you get to the user.
e.g. Trying to simulate clothing is unlikely to work, because we know intimately how our real clothes are supposed to feel.
 
AUGMENTED REALITY DIFFERS FROM SCREEN MEDIA
Unlike games, where players tend to embrace identity and power fantasies, participants on the holodeck tend to role-play as themselves and stick to their known hierarchies.
 
The job of a writer, therefore, is not to create somewhere believable, but create somewhere desirable…to help participants create their own willing suspension of disbelief.
 
Rather than generate an action figure to role-play, Morgan creates the possibility space around the player with lots of space for their own role-play. The player is their own hero and you can’t assume anything about them.  
 
e.g. There is a line in 'Book of Spells' (an AR game co-created with J.K. Rowling): "Are you a wizard or not?" If a player didn't open the Wonderbook for 10 seconds, the simulated narrator would ask, "C'mon, are you a wizard or not?" In user testing this proved to be an evocative hook; participants wanted to engage as soon as they heard it.

So, what does this teach us about writing for augmented reality?
- You are not in control. There's nothing to stop your audience moving off if they're interested in something else. e.g. First-time VR players will often test NPCs for realism by doing socially unacceptable things.
- The player's experience of control in the real world is a constant negotiation between what they can do, what they can't do, and what they can get away with. You need to negotiate that with them too, which means that writers aren't just storytelling, they're also curating.
- In VR storytelling, far more than in flat-screen games, players are invested in the reality of the VR, so everything comes under more scrutiny. That means you have to give them a way to make sense of every limitation and control from within the narrative or simulation. e.g. If you want to lock players in a room until they complete a task, there needs to be a reason or explanation for the locked door, such as air locks or decontamination chambers, and you need to put that story out there for players to discover.
- In VR the player is radically embodied in the simulation in a way that they're not in a psychologically immersive flat-screen game, so you have to find a new way to talk to them.
- How do you do this? You can use the power of implication, by implicating them in the situation. e.g. To encourage players to press a detonate button, it might be better to pare back the script, so that rather than letting them hide behind a character, the writer foregrounds the player's own psychology at that moment, i.e. don't impose a reason to press the evil button; leave that reasoning up to the player. At various points in The Assembly (another game Morgan worked on), players are accused of various things, but players decide what's actually going on.
- If you leave room for their own motivations, it stabilises their identity inside the sim. Challenging users to justify themselves, by accusing them of a certain motivation and forcing them to come up with their own reasoning, inspires them to invest part of themselves in what they're doing.
This is why he began a site-specific mixed reality experience with audio provocations to the user:
-Try to act normal
-Are they still watching?
-Only you can hear me
-Just act natural

These phrases mix the user's own internal response with the staged reality, and force the user's identity to stabilise around a few important things: something is going on, something is not as it seems, you have a job to do, you have a secret, something important is happening, just try to act normal. Everything else, all the context and world-building, can be filled in later.

"….and that’s how we brought our players in, not by putting them in a costume, but by telling them they were already involved, that they had a secret to protect, that it was already too late i.e. before they realised it they were already wearing the Sherlock Holmes outfit because that’s what people do.  It turns out control and identity are kind of the same thing."


We have to work alongside the player. Every time we try to define who they are and what they want, we'll force them out of immersion just as quickly as if we made them run at 50 miles an hour when all they really wanted to do was stroll, or made them play Captain Kirk when all they wanted to do was be the bad guy.

Morgan, Rob. 2016. "Escaping the Holodeck: Storytelling and Convergence in VR and AR." GDC talk. U.S.: GDC.


3/17/2020

Immersive Interface Design

 

Immersing a Creative World into a Usable UI (USER INTERFACE)

UI designer Steph Chow discusses how to embed a game's world into its user interface (UI), and how to strike the right balance between player immersion and player usability.
​
Immersive game elements include characters, VFX (visual effects), environments and UI
 
UI is not just about bright juicy green buttons.  UI is part of an immersive strategy.
 
AR AND UI IMMERSION
  • A big focus on camera visuals needs to be balanced by minimal game controls
  • The balance between familiarity and fantasy is also important
  • But there still needs to be a way to imply the AR world when the camera is off
 
VR and UI IMMERSION
  • A huge opportunity for immersive, in-game visuals and physical interactions
  • Animation, audio and haptic effects can also enhance player feedback
  • But complicated tasks still require explanatory visual interfaces and accessible buttons.
 
ENGAGEMENT IN UI IMMERSION
 
Players play games not just because they are fun, but because audiences grow to love the unique worlds (fictitious universes) in which they are set.
Details that go beyond characterisation, like typography, colour palettes, shapes and iconography, can help keep a player immersed and engaged. Chow aims for a branded experience through both visuals and functionality.
 
The interface needs to be immersive and usable. 
The process of embedding a game's world in its interface unfolds in three phases:
  1) Research, 2) Exploration and 3) Iteration
 
  1. Research:
What is the visual culture behind your world?
This is the time to seek out the visual elements that make your world distinct.  This can be inspired by history e.g. the 1950s American iconography of the Fallout Series
Or by Nature e.g. Paradise Bay, reminiscent of oceans and rocks
Or by Subcultures e.g. Splatoon 2 takes its inspiration from punk references, including graffiti.
 
Chow recommends taking research beyond google searches – e.g.  watching movies, visiting museums

  2. Exploration
It’s important to explore the spectrum offered by your ingredients
  • Diegetic (included in the game-world, so seen and heard by in-game characters) vs non-diegetic (only visible to players)
Diegetic UI is more likely to be fully immersive, is easy to grasp narratively, and preserves the 4th wall.
But if you have a lot of information to show, it can get buried in a diegetic look.
  • Non-diegetic UI can help guide players through complex tasks by clearly separating game-play challenges from the detailed content. It needs to be designed carefully, however, to avoid distraction and further complicating the task.
 
Deciding between these two approaches depends upon three things:
  1. Amount of platform screen space
  2. The complexity of the game mechanics e.g. simple game mechanic vs complex strategy
  3. How player is interacting with the game (touch, controller, camera)
 
  • Skeuomorphic vs flat
Skeuomorphic design (incorporating familiar but non-functional design elements) creates a sense of familiarity by emulating real-world materials, whilst flat design stays true to its medium (ignoring decorative details to focus instead upon, for example, the inventory and its function in the game).
It’s worth exploring different layout options to judge the best approach.
  • Layout
Can provide both immersion and usability – draws upon familiarity to improve clarity, makes functions and form clear in the visuals
  • Animation
Actions can communicate timing urgencies, be pinned to interactions, direct player attention and create a sense of pace.

  3. Iteration: To find the balance between narrative impact and usability
 
Things to evaluate
  • Readability – Has the interface been over-designed to the point of confusion?
  • Personality – Are the brand’s keywords visually implied in the design?
  • Implication – Are interactive and non-interactive elements distinct and clear?
  • Scale – What is the memory load of your design?
 
Also keep usability heuristics in mind
  • Visibility of system status
  • Match between system and the real world (e.g. using conversational language)
  • User control and freedom (e.g. providing clear abort, or restart procedures)
  • Consistency and standards
  • Error prevention
  • Recognition rather than recall
  • Flexibility and efficiency of use
  • Aesthetic and minimalist design
  • Help users recognise, diagnose and recover (e.g. with simple error messages that point to solutions)
  • (Easily discoverable) Help and documentation

Chow, Steph. 2018/2020. "Immersing a Creative World into a Usable UI." U.S.: Game Developers Conference.

    Author

    The USW Audience of the Future research team is compiling a summary collection of recent research in the field of immersive and enhanced-reality media.
