Forms of representation in VR

I think one of the biggest strengths of VR is the fact that the player feels immersed in an alternative environment and feels an intimate connection to that world and the objects within it.

Given this feature, I think this medium is well suited to representing the building of relationships between people – a simulation of how people interact, connect, and bond.

multinational friends in VR talking to the player

I first thought about this idea because of the environment we’re in at NYU Abu Dhabi, where we have students and faculty coming from literally everywhere around the world. I recently served as a Peer Ambassador during the last Candidate Weekend, where I talked to prospective students and realized how overwhelming it can sometimes be for people who have never travelled abroad before. I think simulating human interactions through VR could help people like these students figure out how to situate their thoughts and conversations in a global context.

In this sense, VR could be used as an educational tool for people wanting to learn about people hailing from all over the world. For this specific representation, it could incorporate intricate details specific to different cultures and societies by simulating the different reactions and responses from characters within the VR world coming from different countries. The player can meet and converse with these characters and be informed about the nuances of different cultures.

Such a representation in VR provides an interface for interaction where people learn how to approach people from different backgrounds, ask informed questions, and know what factors to keep in mind when conversing.

But this leads to questions such as

  • Why not just interact with real people?

I think such a representation of multicultural and multiethnic interaction in VR is in no way suggesting that this is the only or most accurate manifestation of what these interactions would look like; rather, it is a way for people to start thinking about how to act, approach, and behave when they are in such situations. There are benefits to meeting people like these characters in person, but for those who fear making mistakes and want to avoid being overwhelmed, this is a good option to try out.

  • What does VR bring to this specific form of representation?

I think having realistic characters to guide you through human interactions can prove to be very important for people not just wanting to “practice” and get to know other cultures before meeting people in person, but also for those who actually struggle to socialize and are introverted – VR can strive to provide an experience as similar to reality as possible.

Form of Representation

As Bret Victor suggests in his talk, The Humane Representation of Thought, a powerful medium is what allows powerful representations to spread: the medium determines how, and to what extent, a representation can be executed to its full potential.

I think that the medium of virtual reality allows the user to become immersed in an environment that would otherwise be difficult or impossible to experience. For example, virtual reality can allow an ordinary person to feel as if he/she is walking on the surface of the moon, something only a select few trained astronauts get to do. In fact, virtual reality allows the user to be an active participant rather than a passive observer: with the hand controllers, the user has the ability to manipulate what is inside the virtual world. From this essence of virtual reality, I think a form of representation well suited to the medium would be “thinking,” more specifically creating “dynamic environments-to-think-in.”

Visual Image of Bret Victor’s Dynamic Environments-To-Think-In

In a sense, the medium of virtual reality “treat[s] the human beings as sacred” (Victor). Virtual reality can give superpowers to the user by giving the user agency to see and alter the virtual world they are in. Let’s say that the user is placed on a deserted island and can see the setting through the headset. This user, if given the option, could perhaps light a fire or cook food using the controllers. More realistically, the user could craft his own image of his house in his virtual space, which he could use to work through his thoughts and come up with new ideas. Such an interface would engage different parts of the user’s brain and allow the user to think in a different manner. Because thoughts and ideas tend to arise when one changes one’s environment, the medium of virtual reality would be effective in providing that alternate space.

Blog: Representation

In “The Humane Representation of Thought,” Bret Victor discusses different modes of understanding mediums, specifically the use of different sensory channels and enactive, iconic, and symbolic methods. At the moment, virtual reality is most typically represented using different gaming systems, such as the Vive, which allow virtual reality to be relatively dynamic. The user navigates and understands the game through action, image, and language-based representations. These games also appeal to various sensory channels; they are visual because of the graphics, they are aural because of sound effects, they are tactile because of the use of controllers, and they are spatial because of the 360 degree aspect, complete with moveable depth.

What is perhaps lacking in this current representation, however, is how the user can understand the virtual reality medium kinesthetically. There are not very many games out there, to my knowledge, that involve movement of the user’s body in a way that matches actual reality effectively. The bow-and-arrow activity in the Unity example world, for instance, does not accurately match how you would shoot an arrow in real life. It takes into account aim, a small pull-back motion, and the push of a button in order to shoot the arrow. When shooting an actual bow, there are several other aspects that go into how the archer’s arrow will shoot. The position of the archer’s elbow, for example, is very important. There is also tension in the bow’s strings that you are not able to feel in your fingers and arms with Vive controllers.

A well-suited representation for virtual reality, therefore, would be one that better takes into account the kinesthetic mode of understanding without compromising other modes of understanding the medium. Sensors all around the body to better map body movements, for example, could be a possibility.

Development blog #2. Mai, Max and Shenuka

Pre-development blog.

Hello friends
Our virtual reality is based on an action that is not quite an everyday one, but one that is done very, very often – watering the plants. We came up with this idea after we met at the marketplace, and one of the things that inspired us was the man-eating plant from “Little Shop of Horrors”.

https://www.youtube.com/watch?v=GLjook1I0V44
Here is the video of that plant 🙂

Our idea was to create a terrarium/greenhouse in which the player is placed and sees a desk in front of them. So here is the storyboard.

There will be various seeds on the table, but if the person chooses a specific one, a man-eating plant will grow. Here are some pictures that come close to what we imagined for our environment.

There will also be a man-eating plant at the back, but the person does not know about it unless they look back 🙂
That is all for our idea, hope you like it 🙂

Development blog #2

Greetings!

Here is our development blog #2. We have created a work station and implemented interaction with objects. We used the prefabs from the SteamVR interaction example, such as the “Throwable” script and, of course, the camera rig, which made things much easier since the camera contains the controllers. We created the work station out of parts from the “Green house 3D” asset: we just took the wooden parts and built a workbench and some shelves. After finishing the work station and adding the “Throwable” script to some cubes and spheres, we added a water jug and a pot, gave the pot a mesh collider so it can actually hold objects, and now we can place objects into the pot. We added the same scripts to the water jug, so we can now pick it up and throw it (just for fun). After that, we tried to add water particles to the tip of the jug to make it pour water. We added the “Water FX” asset, but it only had rain, so we decided to shrink the rain to a tiny area and make its source the tip of the water jug. There were some problems with gravity, and that is where we stopped 🙂

Here are some videos :

Development blog #3

Greetings. This week was the biggest move for us. We have finished the project and we have implemented a lot of new things and updated some scripts:
-Fixed the water from the watering can; it now pours only when the can is tilted more than 45 degrees (see the sketch just after this list)
-Added a seed interaction with the pot. When the seed is in the pot and is watered, it grows.
-Added some plants for the inside of the greenhouse
-Added more trees on the outside of the greenhouse
-Added a “small” butterfly 🙂
-Added a cage with a plant at the back
-Implemented sounds
-Added some throwable objects
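For the tilt check, something along these lines would do the job. This is only a minimal sketch, assuming a script on the watering can with a child ParticleSystem at the spout; the class and field names are placeholders, not our exact code.

```csharp
using UnityEngine;

// Hypothetical sketch: enables the water particles only when the can is
// tilted more than a threshold angle away from upright.
public class WateringCanPour : MonoBehaviour
{
    public ParticleSystem waterParticles; // child particle system at the spout
    public float tiltThreshold = 45f;     // degrees of tilt needed to pour

    void Update()
    {
        // Angle between the can's up direction and world up:
        // 0 when upright, 90 when lying on its side.
        float tilt = Vector3.Angle(transform.up, Vector3.up);

        if (tilt > tiltThreshold && !waterParticles.isPlaying)
            waterParticles.Play();
        else if (tilt <= tiltThreshold && waterParticles.isPlaying)
            waterParticles.Stop();
    }
}
```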

Here is a short video of the project.

Project #2 Storyboard – Claire, Junior, Atoka

Everyday Theme:

  • Find your glasses!

Scenario:

  • Living with different people in the share house
  • User (A person with blurry vision/bad eyesight)

Setting:

Bathroom

  • Shower
  • Shelf
  • Sink
  • Door

Layout:

  • Rectangle

Storyline:

The user has just woken up and is heading to the bathroom. The story begins at the point where he is at the door of the bathroom. Because he is sharing the house, he lives with three other housemates, all of them with some kind of vision problem (bad eyesight). There are four pairs of glasses lying around in the bathroom. Help him find the right glasses!

Original (blurry)

Four kinds of glasses:

  1. Double vision
  2. Zoom In
  3. Zoom Out
  4. Normal (clear vision)

The user will hear a voice line like the following when he puts on each pair of glasses:

  1. Double vision – “Wow…I’m seeing double”
  2. Zoom In – “Hmm…this doesn’t seem quite right”
  3. Zoom Out – “These aren’t mine”
  4. Normal (clear vision) – “Finally, I see things properly!”

Storyboard:

Development Blog: Project 2-Simran (and Yufei, Ju hee, and Lauren)

For our project, we want to create an environment that relates to sustainability on campus. We wanted to challenge what happens in the “real world,” where someone can pass by trash without picking it up and face no consequences. In our alternate reality, we hope to have negative feedback so that the user/recipient is transformed, translating into different actions/reactions in the real world. We hope to use a Gazelle article written about recycling on campus to inform our interaction design.

Some initial questions: how campus focused should it be? Should we create an environment that is a realistic representation of campus? Do we make the campus environment more abstract? When designing virtual reality experiences, how do we provide feedback to the user when they have reached the edges of our world? How should the trash respond when someone walks past? Do they rise and float in the user’s field of view? Is there some sort of angry sound that increases with time? What feedback is provided if the user puts the trash in the wrong compartment (plastic vs paper vs general like the campus receptacles)?

Our group’s storyboard sketched by the talented Lauren!

From this initial concept, we decided to just start building the piece in Unity to see what we are capable of accomplishing in a relatively short amount of time. We split up the work: Lauren and I will do the interactions and Ju hee and Yufei will build the environment.

After the first weekend, we had an environment built with a skybox and some objects. As a team, we’ve decided to change directions in terms of the environment…we want to build an abstract version of the campus. This will delay things as we only have a week left and the environment will take at least three days to build, but I think it’ll be worth it in the long run. I’d rather have less complex interactions and a more meaningful environment at the end of the day. Since Lauren has a very strong vision of what she wants the environment to look like, we shall separate the tasks differently. She and Ju hee will do the environment and Yufei and I will try to implement the interactions.

Here is the lovely environment that Lauren and Ju hee have built. It looks very much like campus! Yufei and I have just integrated the Vive and SteamVR system into the environment and are looking around the space. I wish we had integrated it earlier, as there are a few scale issues and the space is very, very large, things that can only be seen through the headset. We shall have to implement a teleport system and rescale objects.

Yufei is working on the teleport system. SteamVR 2.0 makes it quite simple to add teleport! We just needed to add the ‘teleporting’ prefab and the teleport points. One thing we are struggling with is the level of the teleport system. It needs to be at player arm level and we’ve tried various combinations of levels of ground, player, and teleport points, but when we make it the same level, the player teleports lower for some reason. For now, we shall place the teleport points slightly above.

Yufei made the system into a teleport area rather than points. The arc distance of the raycast seems to be something we need to play around with to match a comfortable level for the player’s arms. For now we have made it 10 which makes it easy to teleport, but difficult to teleport to a close location.

We have spent a lot of time setting up our base stations, unfortunately. Additionally, whenever we look at the environment through the headset and move our heads, the buildings seem to flicker in and out and sometimes disappear completely. A forum search reveals that we need to adjust the clipping planes, which define the region of the scene the camera actually renders. We have adjusted the near and far parameters to 2 and 2000 respectively and that seems to work just fine! Additionally, the textures on the grass and floor seem very pixelly and stretched out, so I’ve increased the tiling on their shaders.
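Both of these were inspector changes, but the same fixes could be applied from a script. Here is a rough sketch using the values above; the component names are assumptions, and the tiling value of 50 is just a guess at what “increased” means.

```csharp
using UnityEngine;

// Hypothetical helper: applies the clipping-plane and tiling fixes described above.
public class SceneFixes : MonoBehaviour
{
    public Camera vrCamera;         // the VR eye camera
    public Renderer groundRenderer; // the grass/floor renderer with stretched textures

    void Start()
    {
        // Widen the visible region so distant buildings stop flickering out.
        vrCamera.nearClipPlane = 2f;
        vrCamera.farClipPlane = 2000f;

        // Repeat the ground texture more often so it looks less stretched.
        groundRenderer.material.mainTextureScale = new Vector2(50f, 50f);
    }
}
```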

tiling of campus ground

Time to implement interactions on the trash! I’ve added the Throwable and Interactable scripts to all the objects. For now, there are cereal boxes, toilet paper rolls, wine bottles, food cans, and water bottles. Yufei and I decided to delete the toilet paper rolls (why would one throw those away?) and the wine bottles, since they have liquid in them and glass can only be recycled on campus in a few places. We also deleted the cans, as metal can only be recycled in the dorms and we wanted it to feel like waste disposal while walking around campus. We did add a chip bag, as we wanted something to go into the waste bin rather than one of the recycling bins.

Speaking of the bins, I’ve added labels to them. At first I used the UI Text component, but it could be seen through all objects and was quite blurry. To rectify the blurriness, I increased the font size substantially so that it was bigger than the character size, and I reduced the scale of the text object to the size I wanted the text. Another forum search revealed that the see-through problem was also because it was UI Text, so I have switched to the TextMeshPro text component and the problems seem to be fixed.

fixing blurry text

I am testing the objects to see if they can be picked up, but they are quite far from the player since they are on the ground. Yufei and I are continually playing around with the ground, teleport, and player levels to find something that works, but nothing seems to. We’ve tried putting the ground and player at 0 like it says to do online, but when we also add the teleport in, the teleport level seems to change a lot. I shall make the objects float for now, as we are running out of time.

I am struggling to find the right attachment settings on the scripts. Additionally, our binding UI does not work, so we seem stuck with the default binding of picking up an object with the inner button on the controller. The objects still don’t seem to be picking up.

I don’t know what I’ve done differently, but I can pick up one of the objects now. However, it doesn’t seem to stay in my hand so I’ve only nudged it. It acts like a projectile so it takes whatever velocity my hand gives it in the direction of the nudge. Not good!

Yufei has been working on the objects and says that we need the objects to have gravity for us to be able to throw them. She has also found the sound files. The question now is whether we should just place the trash on tables. I’ll play around with it and see how it looks…after all, it can just be like the D2 tables, I guess. It doesn’t look that bad honestly, but Max comes to save the day by helping us find the right combination of ground, teleport, and player. Also, it seems like our room setup was incorrect, which is why it was difficult to reach the floor. Either way, the system works a lot better now and I also feel less nauseous when testing since it feels more natural. I still need to find an arc distance that works; 7 seems best for now. I have also kept the tables as they are reminiscent of D2, but moved the objects so that they are scattered on the floor.

For the bins’ interaction, I planned to add tags to the objects and the bins. If the tags matched (if they were both ‘plastic,’ say), then it would be correct. I added the test for this condition inside the loop of the target hit effect script on all the objects, not realizing that the loop only checks for the target collider, not for the possibility of a non-target collider. I modified the script to add two public wrong colliders for the other two bins. If the object hits the target, I want the correct sound to play and the object to be destroyed upon collision with the bin. If it hits the wrong one, the incorrect sound should play and a message should pop up saying: “This object should be placed in the bin marked: “ + the tag of the object. Thus, for the chip bag, water bottle, and cereal box, their target collider is the bin they should be placed in and the wrong colliders are the two remaining bins.
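Stripped of the SteamVR target hit effect script it was bolted onto, the logic boils down to something like the sketch below. This is not our actual modified script, just the tag-and-collider idea with assumed names; it presumes the bin colliders are triggers and each piece of trash has a Rigidbody.

```csharp
using UnityEngine;

// Hypothetical sketch: attached to each piece of trash. Compares the bin it
// enters against its target bin and plays the matching feedback sound.
public class TrashSorting : MonoBehaviour
{
    public Collider targetBin;            // the correct bin for this object
    public Collider wrongBinA, wrongBinB; // the two remaining bins
    public AudioSource correctSound, incorrectSound;

    void OnTriggerEnter(Collider other)
    {
        if (other == targetBin)
        {
            correctSound.Play();
            Destroy(gameObject); // trash disappears into the right bin
        }
        else if (other == wrongBinA || other == wrongBinB)
        {
            incorrectSound.Play();
            // Placeholder for the pop-up message described above.
            Debug.Log("This object should be placed in the bin marked: " + gameObject.tag);
        }
    }
}
```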

adding ‘wrong colliders’
Settings for target hit effect script

I’ve added two audio sources for the incorrect and correct sounds. However, I keep getting an error in the debug log about there being more than one audio listener. The culprit is, strangely, the player prefab: it has one attached to the VR camera, as expected, but it has another one elsewhere, which is strange because a Unity scene should only have one audio listener. I delete the one not on the camera and hope for the best. Now my sound works!

Now the sounds work, but the destroy-on-collision and the message do not. There is already a boolean variable for the destroy on the script, but it doesn’t seem to be working, perhaps because I modified it? I just wrote my own destroy method in the script and the problem seems to be resolved. I also need to adjust the height of the bins so they feel more natural.

I’ve also fixed the cat animation so that it goes from idle A to idle B to walk.

The environment still feels quite big. After you pick up an object, it’s such a chore to walk all the way to the bin on the other side. I’m going to play around with the position of everything in the environment to make it easier to move around. Additionally, I want to make it obvious that one should pick up the trash by featuring the bins quite prominently in the scene, as I don’t want to ruin the feel of the piece with instructions.

It’s now 1 am Sunday night, so I’m off to bed. But hopefully someone in my group or I can work on this Monday morning to implement the nice to haves:

  • if you place something in the wrong bin, a message could pop up saying which bin to put it in, so it’s more instructional in nature.
  • Having the trash have an emission or make a sound if you are within a certain distance from them
  • Having some sort of reaction if you pass this distance without picking it up
  • Having more trash and more complex trash

I am working on the message now, but I seem to have issues with positioning the UI and with getting the tag to concatenate onto the message. If I can’t get it working before class, I shall simply delete it.

Development Blog [Zenboo]

Mar 3

Our group: Vivian, Adham, Cassie, Nico

We started off with some brainstorming for our interactions and actions:

Initial Ideas:

  • Throwing crumpled paper into a basket
    • Implement points based on how far back you are → makes you move around
    • Obstacles (desk, etc.)
    • Crumpling paper
    • Classroom, library
  • Putting food onto tray- cafeteria
  • Washing face
  • Taking care of plants
    • Zen
    • If you cut the plants they just float around
    • Twisting knob motion to speed up time → plants grow, lighting changes
  • Drawing
  • Slingshot
  • Flipping coin into fountain
    • Something could pop out, you have to catch it

After settling on the plant idea that we all enjoyed, we decided to go into more detail:

Taking care of plants:

  • Time
    • Lighting changes
    • Sun/moon
    • Plant growth
  • Environment ideas:
    • Dorm room
    • Windowsill
    • Small cottage
    • Outside garden, fence
  • Interaction
    • Watering
    • Cutting
    • Picking fruit/flowers
    • Growing bamboo

With a solid idea in mind, we went ahead and designed our storyboard:

–Step 1–

Clump of bamboo in front of you

To your side: tree stump with watering can + cutting tool

Surrounding mountains and other bamboo

You’re inside a circle of rocks

Butterflies are flying around

It’s golden hour

–Step 2–

You have the water picked up

Water is gone from stump

–Step 3–

Bamboo is taller

–Step 4–

Replace water with axe

Now the water is back on the stump and the axe is gone

–Step 5–

Show the particles of the bamboo disappearing

–Step 6–

Now an empty spot of bamboo

Our storyboard:

Mar 6

After our group roles were self-assigned (I got the responsibility of scripting), I thought it would be important to start as soon as possible. Since I knew very little about C#, Unity, and scripting, I started practicing immediately.

The first goal for the day was to play around with some scripts from SteamVR and the online Unity manuals. The first script I created was heavily dependent on what I found online. I assembled a script that caused an object to turn into a new prefab once it collided with a sphere. In order to make this functional I had to make sure the sphere and the cube both had Rigidbodies and colliders. I also made sure that I could throw the sphere, using the components from SteamVR, so that I could pick it up with the back button of the controller. The prefab the sphere turned into was a cup from another asset pack. This was quite a hilarious scene but very beneficial, because it taught me how to identify a specific prefab to use as the transformation and how to prevent the same effect from triggering when touching any other named object.
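For anyone curious, the core of that first experiment looked roughly like this. It is a reconstructed sketch, not the script itself; the prefab reference and the triggering object’s name are placeholders.

```csharp
using UnityEngine;

// Hypothetical sketch: when this object collides with a specifically named
// object, it is replaced by a new prefab (e.g. a cup) at the same spot.
public class TurnIntoPrefab : MonoBehaviour
{
    public GameObject replacementPrefab;     // e.g. the cup prefab from another asset pack
    public string triggeringName = "Sphere"; // only this object causes the swap

    void OnCollisionEnter(Collision collision)
    {
        // Ignore collisions with anything other than the named object.
        if (collision.gameObject.name != triggeringName)
            return;

        Instantiate(replacementPrefab, transform.position, transform.rotation);
        Destroy(gameObject);
    }
}
```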

The next project was a script that would imitate the effect of bamboo disappearing when hit with a sickle. I used a capsule in place of the sickle, and I made the movement happen through a public variable: the movement could be in any direction, and it would simply add that amount to the current position. The cylinder I attached this script to needed a rigid body and a collider in order to have an effect. I also made sure gravity was activated, because whenever I hit it with the capsule I wanted it to fall back down to the floor. The big issue I found with this setup was that the cylinder could fall out of reach really easily and there was no way of retrieving it.

Realizing how easy it was to make effects occur on objects, I brought in several prefabs for similar experiments. I brought in a sickle and stacked bamboo cups (to imitate a bamboo shoot). The script that I created this time was a lot simpler: whenever the sickle collided with the bamboo, it would simply destroy the object. This is very effective at making it look like the shoot is being chopped at different points. If many cups were stacked, then I believe it would be quite entertaining to see all of them fall down or have some separate effect. This could be played with a bit to obtain the desired result; I will have to discuss it with the group. Overall, today I found a way to make tools that can be picked up and come in contact with bamboo shoots, which can then either be knocked away or deleted.

Mar 8

Today I started off by recreating a really crude representation of the environment. I did this so that I could start writing scripts that would be realistic for the area the objects will be placed in at the end. This area included a table, the two tools to be used (sickle and watering can), and a patch of soil with bamboo.

I worked relentlessly on the script responsible for the function of the watering can. My goal was to make a raycast that would detect the bamboo and then return it. I also wanted the object to know when it was being poured and when it was not, which depended on the tilting of the object: past a certain z rotation, as could be seen in the scene, it looked correct for it to be pouring water. I could not get the raycast to work (I couldn’t even see the debug line that was supposed to show up), but I succeeded in making the program know when it was pouring.

In the end I was very frustrated because the line wasn’t working and I didn’t know how to give the tilting of the can any function. I placed a child Particle System object on the can where I thought the water should come from and then left the work for another day.

Mar 9

Today I worked specifically on three scripts that I thought would be quite vital for the comfort and enjoyment of the final product: one that got the bamboo to be cut by the sickle, one that brought items back into range if thrown too far, and one that was responsible for the can’s particle system and detection of the bamboo.

I started off with the bamboo script. I ended up keeping it quite simple and basing it off a previous script that I talked about a few days ago. I made it so that the object that can cut the bamboo is not hard-coded; a specific name for the cutting object can be set.

I noticed a large issue with the script when I was trying to hit the bamboo: only when the sickle was dropped on the bamboo would it make it disappear. This was because of some problems with the interaction script from SteamVR. The Attachment Flags had to be altered so that the sickle could not clip through items, which also made the entire system a bit more realistic.

Next, I worked on the script to bring the items back from a distance. This took a lot of time, but in the end I found out that you have to reset the rotation and position of the item before bringing it back, or else it flies away. The final product looked very good: the item would just drop back onto the table once thrown too far. I also made the distance at which an item gets teleported back adjustable; I found 2 to be good in the x and z directions. I attached this script to both the can and the sickle.
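A minimal sketch of that bring-it-back behaviour is below. It assumes the “home” pose is recorded at start, so it is closer to the cleaned-up version described on Mar 12 than to this first pass; the class and field names are placeholders.

```csharp
using UnityEngine;

// Hypothetical sketch: teleports a tool back to its starting spot on the table
// whenever it drifts too far away in x or z.
[RequireComponent(typeof(Rigidbody))]
public class ReturnWhenFar : MonoBehaviour
{
    public float maxDistance = 2f; // 2 worked well in the x and z directions

    Vector3 homePosition;
    Quaternion homeRotation;
    Rigidbody body;

    void Start()
    {
        homePosition = transform.position;
        homeRotation = transform.rotation;
        body = GetComponent<Rigidbody>();
    }

    void Update()
    {
        Vector3 offset = transform.position - homePosition;
        if (Mathf.Abs(offset.x) > maxDistance || Mathf.Abs(offset.z) > maxDistance)
        {
            // Reset velocity, rotation, and position before dropping it back,
            // otherwise it keeps its momentum and flies away again.
            body.velocity = Vector3.zero;
            body.angularVelocity = Vector3.zero;
            transform.SetPositionAndRotation(homePosition, homeRotation);
        }
    }
}
```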

Finally, I tried working on the watering raycast issue, but I ran out of time. I did find out why the raycast debug line wasn’t visible, though: you can only see it in the Scene editor. It was pointing the wrong way, so I changed its direction (in a quite annoying way) and moved it up so that it aligned well with the nozzle (this required some testing since the scales are so messed up).
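A rough reconstruction of the kind of raycast I mean is below, assuming the ray should come out of the can’s nozzle in the pouring direction. The direction, range, and names are guesses for illustration, not the exact script.

```csharp
using UnityEngine;

// Hypothetical sketch: casts a ray from the nozzle in the pouring direction
// and checks whether it hits a bamboo object.
public class WateringRaycast : MonoBehaviour
{
    public Transform nozzle; // empty child placed at the can's spout
    public float range = 3f;

    void Update()
    {
        // This debug line is only visible in the Scene view, not in the headset.
        Debug.DrawRay(nozzle.position, -nozzle.up * range, Color.red);

        RaycastHit hit;
        if (Physics.Raycast(nozzle.position, -nozzle.up, out hit, range))
        {
            if (hit.collider.gameObject.name.Contains("bamboo"))
            {
                // Bamboo detected under the spout: growth could be triggered here.
            }
        }
    }
}
```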

So much research, video watching, and web scrolling was required to get even here. It makes the final products feel so good when they work…

Mar 11

I noticed that it was difficult to know exactly when the watering can was pointing directly at the bamboo (causing it to grow), so I decided to add a pointer of sorts. Initially I tried to work with line renderers, but this was very difficult: the position had to be continuously updated and the hit position (or end position of the line) could not be infinity. After failing to resolve the issue, I found an extremely simple solution using a cube. I created a long thin cube oriented the way the raycast was and colored it orange just to make it more appealing and fitting with the environment.

Next, I worked heavily on the growing of the bamboo. I found the growing-flower script from SteamVR and worked off that. I first made two different kinds of bamboo shoots, bambooOG and bamboo, which essentially are the stump the bamboo will grow out of and the bamboo that actually grows out. I did this because I noticed it would make the entire system easier. I could make solely the bamboo vulnerable to the sickle while keeping the bambooOG permanent. I added the growing script from SteamVR to the bambooOG.

After trying to grow the bamboo I noticed two large problems. The first was that the bamboo spawned way too fast, because growth occurred as long as the can was pointing at the bambooOG (so with every update). To resolve this I set a counter that increases every frame up to 60 and then resets, running the growth only then. This means that if Update runs 60 times a second, the bamboo grows one segment per second. The next issue to tackle was the bamboo not staying stacked on top of each other after growing (and pushing the top ones up): the bamboo did not want to go any higher than two shoots tall, and continuously knocked the top one away. This occurred regardless of the position where the bamboo was spawned.
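The frame-counter idea looks something like this as a sketch. The growth call and the watering flag are placeholders standing in for the SteamVR growing script and the can’s detection.

```csharp
using UnityEngine;

// Hypothetical sketch: only triggers growth once every 60 Update calls,
// i.e. roughly once per second, instead of on every frame.
public class GrowthThrottle : MonoBehaviour
{
    public bool wateringDetected = false; // set by the watering-can raycast

    int frameCounter = 0;

    void Update()
    {
        if (!wateringDetected)
            return;

        frameCounter++;
        if (frameCounter >= 60)
        {
            frameCounter = 0;
            GrowOneSegment();
        }
    }

    void GrowOneSegment()
    {
        // Spawning of the next bamboo segment would go here.
    }
}
```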

While doing trial and error with spawning positions and checking/unchecking all the boxes in all the scripts attached to the bamboo, I came across something wonderfully beautiful: you can lock the position and rotation of the spawned bamboo shoots by altering their prefab’s rigid body. What I tried first was freezing the y position, but then I realized that was incorrect (all the bamboo would just fly sideways). Next I increased the mass (for more stability and bounciness) and locked the x and z positions, and I came up with a wonderfully appealing result. The bamboo grows in an amazing way that looks like the individual segments are balancing on each other. Even better is the fact that the bamboo can be interacted with using the can (spinning the segments and bouncing them up and down). Cutting them down with the sickle is even more satisfying, because you can see them quickly get destroyed (without any effect yet, though).

Naturally, I could have locked the rotations too and it would really look like the bamboo was growing straight up, but I didn’t think the result was nearly as appealing. With this constant balancing, the user was much more pleased (user tested with Max) and wanted to keep making the bamboo grow, because it was partially random too. I also realized that the user wanted to keep growing the shoots endlessly, so I decided to add a segment to the bamboo script that releases the freezing of the rigidbody when a certain height is reached (a public int). The final step would perhaps be to make the bamboo disappear after a while once its height or position was at or below ground level (to prevent cluttering).

Mar 12

Today I made finishing touches to the code. The first thing I worked on was making sure that the segments at the top of the bamboo sprout would fall off after a desired height and eventually disappear after some time on the ground. I did this by writing two functions: one that takes off all the constraints at a desired height (when I did this before there were still some problems) and another that waits a period of time before deleting the segment. The first function was relatively simple, but the second one required a counter. I tried using WaitForSeconds(), but apparently that can only be used inside a coroutine (an IEnumerator). I assumed Update runs roughly 60 times a second, so I made a counter based on that.
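Roughly, the two pieces could look like the sketch below, assuming the segment has a Rigidbody with its x/z position frozen. Note that the delayed delete here uses the coroutine/WaitForSeconds approach rather than the frame counter I actually used; names and thresholds are placeholders.

```csharp
using System.Collections;
using UnityEngine;

// Hypothetical sketch: frees a bamboo segment's rigidbody above a chosen
// height, then removes it a while after it has been released.
public class BambooTopSegment : MonoBehaviour
{
    public float releaseHeight = 3f;        // height at which constraints come off
    public float lifetimeAfterRelease = 5f; // seconds before the segment is removed

    Rigidbody body;
    bool released = false;

    void Start()
    {
        body = GetComponent<Rigidbody>();
    }

    void Update()
    {
        if (!released && transform.position.y >= releaseHeight)
        {
            released = true;
            // Remove the frozen x/z position so the segment can tumble away.
            body.constraints = RigidbodyConstraints.None;
            StartCoroutine(DestroyAfterDelay());
        }
    }

    IEnumerator DestroyAfterDelay()
    {
        // WaitForSeconds only works inside a coroutine, which is why the
        // original script used a plain Update counter instead.
        yield return new WaitForSeconds(lifetimeAfterRelease);
        Destroy(gameObject);
    }
}
```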

The next part I worked on was cleaning up the code of the teleporting-an-item-back script. I noticed that this code was not originally very lenient on placement. The items had to be close to the origin of the environment. I fixed this simply by changing some of the math operations. In the end everything worked wonderfully! The next step is to combine this with the environment!

Mar 17

Today was the final day. We all grouped together to make sure everything was working and that the last day of project development could go successfully. When I came in there were still some issues with the colliding of the water and the execution of the growing command. After three of us worked on the code for a while, we finally got it to work. The big issue had to do with layers: in the end the terrain had to be placed in a different layer along with the can, because the water particle system was constantly colliding with it. We got the physics and actions to work efficiently. The environment looked good and the objects worked well.

Interaction Pet

There was this quite popular game that came out in 2009 for the PlayStation called EyePet that my sisters and I used to play a lot. It’s a different game from the first-person shooters we usually play, and there is no specific linear story. You must set up the camera in the beginning and start the game, and then a live image of where you are sitting gets displayed on the screen.

An egg pops up, which is where interaction with the virtual pet begins; you are able to caress the egg until it hatches and meet your new pet.

The game has a variety of options and different interactions that the player (or players) and the pet can do. I don’t usually like these types of games, but for some reason the makeup of the virtual pet was very well done, and its facial expressions and reactions to things add to the realistic feeling it has when playing.

Interaction

A few years ago, a Kickstarter showed up for a game called Superhot. It had a mechanic I had never seen before, which I really liked and found really interesting. The game is a shooter where you have to fight through different set stages, but time only moves when you do. So you can assess the situation without moving and then calculate your moves in order to survive and win the round; however, if you look around, the bullets continue flying towards you. It started off as a normal computer game, but it came out around the time VR became popular and the mechanic was perfect for VR, so a VR version was made too.

My Favorite Interaction(s)

Actually, there are two interactions I would like to talk about. The first one is a website (shown by Craig in all his classes) that provides an interactive experience. Users access the website via mobile devices, where they can make their own paper plane, stamp it, and throw it away. Users can also catch a paper plane, unfold it, see from the stamps where the plane has been, and stamp it themselves. The design of this experience is quite simple and intuitive, and the instructions are quite clear. What users have to do on their mobile devices closely resembles what they would actually do to make a real paper plane.

https://paperplanes.world/

The other interaction is from a short video I saw, between a user and a piggy bank. The piggy bank looks like a cardboard box. As the user puts a coin on the plate and presses it, a cat sneaks its paw out from the box and ‘steals’ the coin away. The interaction is quite simple, but it tells a vivid story of the coin being stolen by a greedy cat.