Final Project Documentation

Escape Room: Watching

Documented by: Keyin

Group Members: Ben, Chris

PROJECT DESCRIPTION

— This project is an escape room designed for the Google Cardboard environment. The game begins in a hospital, and our protagonist is confined to a wheelchair while moving around and seeking clues. As users explore our world, they are meant to sense the anomalies and pick up on our story: this is essentially an AI/machine-dominated world, in which our character, as a human, is only a research subject and is monitored at all times. We have a few designs based on this setting, which I'll explain in the next section.

— That said, since we never spell out the backstory in words, users are welcome to make their own associations during the in-person experience, and we are open to other possible interpretations. I believe that after we polish up our indicative designs, different readings can only add to the depth of our work.

— Based on the story we have in mind, we chose a psychological-horror style for the space. There are no screams or ghost faces to startle people, but we do use multiple sound effects to create a suspenseful, tense atmosphere. To meet the constraints of Google Cardboard, the only interactions in our project are clicking and long clicking: the former for interaction, the latter for movement. The wheelchair setting was also made to fit how Google Cardboard works, because we intended the user to sit on a chair while playing with the Cardboard. At the same time, the wheelchair restriction gives the user a different experience of moving around the space and interacting with objects.

PROCESS & IMPLEMENTATION

Storyboard

— When building the space, we made a basic hospital structure with a few sections. At first, our mood board leaned toward conventional horror, but we later settled on psychological horror and tried to create a creepy, clinical look; that's why we mostly used white, grey, and red. While searching for assets, we didn't find any that fit perfectly, so we chose a zombie-hospital asset because it includes a complete set of hospital props, and we lit the scene to reinforce the psychological-horror tone.

Top View of Space Layout
Beginning Spot
Ending Spot

To construct our story, we put effort into a few key designs.

1. Player Setting

— The player sits in a wheelchair and can only rotate their head to look around or move the wheelchair to get around. Movement is handled by a character controller, and when the player moves, an animation shows their hands pushing the wheels. The angle the user can look up and down is limited (delta y from -90 to 90) so that it composes a reasonable field of view for a seated person. The movement speed is also set to a relatively low value to suit the situation, and it proved appropriate during playtesting.
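As a rough illustration, clamping the camera pitch in Unity might look like the sketch below; the class and variable names are assumptions, not our exact script:

using UnityEngine;

// Hypothetical sketch: clamps the seated player's up/down view to the
// delta-y range described above (-90 to 90 degrees).
public class WheelchairLook : MonoBehaviour
{
    public float lookSpeed = 2f;  // assumed gaze/mouse sensitivity
    private float pitch = 0f;     // accumulated up/down rotation

    void Update()
    {
        pitch -= Input.GetAxis("Mouse Y") * lookSpeed;
        pitch = Mathf.Clamp(pitch, -90f, 90f);  // the field-of-view limit
        transform.localRotation = Quaternion.Euler(pitch, 0f, 0f);
    }
}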

Player Model
Player Animation

2. Scenes/Objects Consistency

— To match the style of the body model, the clue photos of the character are made in a similar low-poly style. The red circle and red rectangle motifs are also applied in the photo-collection interface and the ending scene.

— The photos were made as if taken by CCTV. If you notice the dates on them, you may realize that all four photos show the same person, our protagonist, from young to old. Together with the space the player is locked in, they are meant to convey that the player has been under surveillance the whole time, which should raise questions: Who is behind the cameras? What is the relationship between the space and the player? What is he doing here, and who locked him in, and why? The player may have some answers by the end of the game.

3. Sound Effects Design

— We included sound effects to make our scenes more expressive and immersive, and to give the player some motivation to get out instead of just idling in the space.

— Besides the basic wheelchair sound effect, there are sounds played only once, sounds audible within a certain area, and sounds triggered when the user enters a specific space. The broadcast at the beginning cuts off abruptly. The combination of unknown footsteps and a baby crying near the mortuary is meant to create tension and hint that something undesirable may happen. There is also a moment when the sound of moving beds is panned hard left, to make the user feel something is on their left; but when they step out of the trigger area, the sound stops suddenly, as if what they heard was only an illusion. Overall, the sound effects are designed to strengthen our "storyness" as well as improve the user experience.
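The panned trigger sound could be implemented roughly as below; this is a sketch under assumed names (the "Player" tag, the pan value), not our exact script:

using UnityEngine;

// Hypothetical sketch: plays a left-panned looping sound while the player
// is inside a trigger volume and cuts it off abruptly on exit.
[RequireComponent(typeof(AudioSource))]
public class PannedTriggerSound : MonoBehaviour
{
    private AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        source.panStereo = -1f;   // fully left-panned
        source.loop = true;
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) source.Play();
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player")) source.Stop();  // stops as if it were an illusion
    }
}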

Sound Effects Demo

4. Ending Scene

— After the player gathers all four photos and notices the dates on them, they can open the door by combining the dates into the password. The game ends at that moment; however, we connected the door animation to another ending scene to make our story more complete.

— Instead of first person, this ending scene is shown in third person through a CCTV screen. It also includes audio of a computer voice, which helps illustrate the story idea. It echoes the four earlier photos, which were taken by CCTV, and indicates that the player is still being watched. What we want to convey is that the player seems to have escaped successfully. But did he really escape?

Keypad + Door Animation + Ending Scene

5. Other details

— We turn the player to face the album on the bed at the very beginning of the game because it is an important clue for the whole process, and we added a glowing effect to highlight it. Even if the player doesn't interact with the album at the start, they can still go through the process, and whenever they come back, they can get the clue that there are four photos in total. And although we refer to it as an album for convenience, in the story it is more like an archive or set of records about our character.

Important clue: Album

— On the keypad interface, clicking the "?" at the bottom right shows a hint that the password has five digits in total. This makes it easier for the player to think of the highlighted dates on the photos, which add up to exactly five digits.

Keypad

EVALUATION

— Overall, I think we did a great job composing everything into coherent scenes and constructing a relatively solid story background. The result is also very playable and makes sense to people who try it. It fits the Google Cardboard environment well, and Cardboard support is definitely a plus for our project.

— The one shortcoming is that, due to time constraints, we did not add explicit written or visual descriptions of our story. On the one hand, this can be a deliberate style, leaving room for users to imagine the whole story themselves based on what they find. On the other hand, for users eager to know the exact story, what we've presented may not be enough. With more time, we could add an introductory animation at the beginning or concluding text at the end to reveal the full story. We could also add more ways for the user to interact. One last small thing: in the ending scene, we forgot to show a timestamp in the corner, even though a CCTV monitor should always display the time.

Watching (Escape Room)

For our final project, we ultimately decided on creating an escape room. In the beginning, we were undecided on the identity and feeling we were trying to convey, and we looked at multiple different assets. After some discussion, we agreed to create an escape room set in an eerie, creepy hospital. We wanted the hospital to resemble a mental asylum that you, the user, wake up in. Additionally, we wanted to build the escape room around someone in a wheelchair and the limitations they would face. This also added to the creepy feel, because you know something is wrong with the character, and it adds a sense of curiosity, since you do not know how you became disabled. All of this feeds into the narrative: you want to escape the room, but also find out who you are and what happened to you.

Figure 1: Character on wheelchair

The space is divided into a few different areas. First, there is the ward in which you wake up. There's a bed next to you with an album, a cart blocking your path, a chair, and a sink.

Figure 2: Starting ward

Once you exit, there’s a hallway that connects to the rest of the space.

Figure 3: Hallway view

From there onwards, the user can explore the space and there’s a room on the right side with a table.

Figure 4: Room

If we go forward from the hallway (refer to figure 3), we enter the operating room. There are a bunch of miscellaneous objects, a recycling bin, and surgical equipment.

Figure 5: Operating room

If you turn left, there's another hallway leading to another part of the hospital. If you turn left from that hallway, there is another ward, with two beds and a drawer.

Figure 6: Ward with two beds

On the right side from the hallway, there’s an empty room with a box.

Figure 7: Empty room

The last part of the hospital is the exit where there is a large metal door and a broken screen.

Figure 8: Exit

The objective of this escape room (other than to escape the room) is to piece together your past. To create this "storyness", we threw the user directly into a ward with little to no backstory. On the bed next to where you wake up, you notice an album; upon clicking it, an image is displayed showing four missing photos (refer to figure 2). The main modes of interaction are a click and a long click. We designed this with a key restriction in mind: the project targets Google Cardboard, which means we had only one button for both interaction and movement. The escape room requires you to click certain objects and explore via long click. In figure 9, the user moves the crosshair over the album and clicks to show an image; from this, the user understands that the objective is to find photos.
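A sketch of how the single Cardboard button might be split into the two inputs follows; the threshold and helper names are assumptions, not the project's exact code:

using UnityEngine;

// Hypothetical sketch: a short press interacts with the object under the
// crosshair, while holding the button moves the wheelchair forward.
public class CardboardInput : MonoBehaviour
{
    public float longClickThreshold = 0.3f;  // assumed cutoff in seconds
    private float pressStart;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
            pressStart = Time.time;

        if (Input.GetMouseButton(0) && Time.time - pressStart > longClickThreshold)
            MoveForward();   // long click: push the wheelchair

        if (Input.GetMouseButtonUp(0) && Time.time - pressStart <= longClickThreshold)
            Interact();      // short click: inspect what the crosshair points at
    }

    void MoveForward()
    {
        // e.g. a CharacterController.Move along the gaze direction (placeholder)
    }

    void Interact()
    {
        // Raycast from the camera center (the crosshair) at close range
        if (Physics.Raycast(Camera.main.transform.position,
                            Camera.main.transform.forward, out RaycastHit hit, 2f))
            Debug.Log("Clicked: " + hit.collider.name);
    }
}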

Figure 9: Clues

The user then goes around the hospital looking for clues.

Figure 10: Clue 1
Figure 11: Clue 2
Figure 12: Clue 3
Figure 13: Clue 4

Each photo has a number on it. These numbers are used at the end, when you have to enter a password on the keypad near the exit.

Figure 14: Exit and keypad

Once you enter the correct password into the keypad, the doors slide open and a cut scene plays.

Figure 15: Beginning of cut scene

Through this, we hoped to create an environment that resembles an escape room, in the context of someone trying to leave an abandoned hospital. The motivation for the user to continue exploring is to figure out the character's past. The interactions are simply clicks that display an image or enter the password. Furthermore, we wanted to paint a picture of a futuristic world in which our actions are being watched (hence the final scene with the security camera) and humans are research subjects. However, this is all left to the user's imagination and how they perceive it.

The brainstorming process itself was quite straightforward; the bulk of the planning was coming up with the idea and the type of feeling we were trying to convey. We considered various types of hospitals (Figure 17), but we wanted one that was creepy rather than gory.

Figure 17: Different hospitals

While we were playtesting (both the actual build and the paper prototype), we received lots of helpful feedback, including adding a motivation to escape and highlighting the interactions. We tried to add motivation through discovering your past, and the album at the beginning indicates what the user should look for in the game. To increase immersion, we added audio that builds an eerie feeling through baby cries, rolling wheels, distorted radio sounds, and the whirring of surgical tools.

Figure 18: Paper prototype

Keyin was the creative genius behind most of the design decisions, while Ben and I provided our thoughts and feedback. From the beginning, we needed to implement a first-person view, and we decided that we wanted the user to be in a wheelchair. To implement this, we combined the wheelchair asset with the character model (the character looks like a zombie). We then added a character controller script that handled the raycast, the first-person camera view, and the interactions. If the player is close to an interactable object and clicks while the raycast is hovering over it, an image is displayed.

I spent the bulk of my time animating the door and writing the script to open the door. I animated it via Unity and incorporated a simple script to test it out in the beginning (press space to open the door). These largely adhered to the tutorials that Sarah posted. Once I made sure the animation was working, I added the keypad via GUI. I followed a tutorial on YouTube to complete this part: https://www.youtube.com/watch?v=ne41wItQmdo.
The script can be described as follows: when the user approaches the door's box collider, they are told that one more click will bring up a keypad (this is done via OnTriggerEnter and OnTriggerExit along with boolean flags).

If the user clicks, the keypad opens (clicking sets a boolean variable, keypad, to true). The user can then hover over and click the buttons. The password is set in the code ("35296"), and there is a variable called input that is initially empty (""). When the user presses a button, input = input + "x", with "x" being the number pressed. Once input equals the password, the door opens (another boolean variable is set to true). Pressing the "?" button brings up a GUI message reading "Enter a 5 digit password." Pressing the "x" sets the keypad boolean back to false, closing it and allowing the user to move again.
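Put together, the logic looks roughly like the sketch below, reconstructed from the description above (names such as the animator parameter are assumptions):

using UnityEngine;

// Hypothetical reconstruction of the keypad/door logic described above.
public class KeypadDoor : MonoBehaviour
{
    private const string Password = "35296";  // digits from the four photos
    private string input = "";                // starts empty
    private bool keypadOpen = false;
    private bool nearDoor = false;

    public Animator doorAnimator;             // plays the sliding-door animation

    void OnTriggerEnter(Collider other)       // player enters the door's box collider
    {
        if (other.CompareTag("Player")) nearDoor = true;
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player")) nearDoor = false;
    }

    void Update()
    {
        if (nearDoor && !keypadOpen && Input.GetMouseButtonDown(0))
            keypadOpen = true;                // one more click brings up the keypad
    }

    // Called by each keypad GUI button with its digit
    public void PressDigit(string digit)
    {
        input += digit;
        if (input == Password)
        {
            doorAnimator.SetBool("Open", true);  // assumed animator parameter
            keypadOpen = false;
        }
    }

    public void CloseKeypad()                 // the "x" button
    {
        keypadOpen = false;                   // frees the player to move again
    }
}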

Originally, I had an issue where the user could not click on the keypad because the cursor followed the user's camera. I resolved it with these two lines of code:

Cursor.lockState = CursorLockMode.None;
Cursor.visible = true;

This allows the user to move the cursor without the camera moving.

Sarah raised an issue that the cursor detracts from the immersion of the story, since it doesn't match the crosshair in the game. I tried to resolve this by changing the cursor to match the crosshair, but the change didn't show up in the build.

Another issue I faced was a NullReferenceException, but that was simply due to me not capitalizing the "S" in void Start().

When animating the door, I had an issue in which the door would slide over the wall. This was resolved by adding another wall over the first wall so that the door would slide in between those two walls.

I felt like we definitely met our expectations in creating an escape room that had a “storyness” to it. I felt like there was also a motivation for the player to explore and try to escape. I am very proud of the work that we produced and I had a lot of fun making this. I was so absorbed in making this that I lost track of time and was 45 minutes late to a class. However, there are definitely some things that we could improve on.

Firstly, in terms of performance, there are some issues with the door. I used OnTriggerEnter and OnTriggerExit to create this interaction, which would sometimes cause lag when the user entered the box collider. It is also a bit clunky, since clicking is used both to move and to open the keypad, and the two interfere. If I had more time, I would love to polish this up.

Secondly, in terms of aesthetics, there are some things I would like to improve. The keypad on the door does not really fit the overall feel of the hospital, and both the keypad and its text should be bigger. I tried to fix this before the deadline, but the changes weren't big enough.

Thirdly, in terms of "storyness", I feel this could be improved with more time. The objective was to explore the hospital, find out about your past, and escape; however, not much background information is given to the user, and the photos scattered around don't add much information on their own. Perhaps some text on the back of each photo would help tell the story.

I also felt there wasn't much clarity, and some controls aren't that intuitive. It was difficult to identify what counted as a clue and what could be interacted with, and the user doesn't initially know what to look for. Adding cues, such as having interactable objects light up when you hover over them, would help.

Lastly, in terms of interactivity, there is a lot more we could improve. As of now, the only interactions are with objects that act as clues and with the keypad. Adding other interactions, such as being able to pick up objects, and making them a key part of finding clues would make the entire experience more interactive (and probably more fun).

There are plenty of things we could still improve, but for the given time frame I am extremely happy with the work we created. In the summer, if time permits, I intend to keep working on it; I've shared it with many of my friends and am asking them for feedback. Overall, I loved what we created and hope to continue working on game projects in the future.

Project 2 – Documentation

For the second project I worked with Ellen, Chris, and Luize. The inspiration was loosely based on interactions that take place in our daily lives, as we wanted to set interactions in a more real-life environment to make the user's experience feel natural. We brainstormed different scenarios of everyday actions and ended up building a kitchen, because everyone is familiar with the place. We considered various interactions that might occur in a kitchen and decided to make the light switch our main interaction. The user comes to understand that the place is unusual after playing with the switches: the first switch turns the light on and off, while the other makes the place alternate by causing fireflies to emerge from the plants over time. This gives the user the idea that switches can do more than just control the light.

Image 1 – the switch asset

Our initial idea was a bedroom where the user wakes up by turning off an alarm, switches on the light, and goes to the kitchen to make coffee, but we narrowed it down to the light switch as the single interaction because of limited available assets and other circumstances.

I worked on setting up the room and adding decorations and furniture. Adding walls, furniture, and other items was very time-consuming; however, I really enjoyed the process. When I finished setting up the room, Luize updated the scene by adding physics and the ceiling with indoor lighting. Afterwards, Chris implemented the interaction with the switch, turning the light on and off, and Ellen added the effects that make the place alternate, such as the fireworks and fireflies.

We added small lights on the switches to draw the user's attention to the switch area (Image 9). That way we tried to make things clearer and more understandable. We also added text such as "press E to turn off the switch", so that when the user enters the switch area, the text appears and helps them understand how to navigate and how to toggle the switch.
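The prompt-and-toggle behavior could be sketched as below; the tag, key, and component names are assumptions, not our exact script:

using UnityEngine;
using UnityEngine.UI;

// Hypothetical sketch: shows a "Press E" prompt while the player stands in
// the switch's trigger area, and toggles the light when E is pressed.
public class SwitchArea : MonoBehaviour
{
    public Text promptText;          // e.g. "press E to turn off the switch"
    public Light roomLight;          // the light this switch controls
    private bool playerInside = false;

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player")) { playerInside = true; promptText.enabled = true; }
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player")) { playerInside = false; promptText.enabled = false; }
    }

    void Update()
    {
        if (playerInside && Input.GetKeyDown(KeyCode.E))
            roomLight.enabled = !roomLight.enabled;  // flip the light state
    }
}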

Image 2 – environment setup
Image 3 – fireworks
Image 4 – fireflies
Image 5 – top view

Image 6 – dining area
Image 7 – kitchen area – light on
Image 8 – kitchen area – light off
Image 9 – fireflies with light off
Image 10 – fireworks

The only actions the user performs are walking around the room and toggling the switches. We wanted the user to start by simply looking around the room and then realizing there are two light switches. We start the user next to the first switch, by the window, so that they can see one of the effects we added: the fireworks (Image 10). After seeing the fireworks, the user can toggle the first switch; when they turn it off, they realize that not only have the lights gone off, the fireworks are gone too. Moving around after being alone in the dark room, the user sees another switch on the other side of the room with a little light on it, inviting them over. When the user enters the second switch area, a text box appears on screen with instructions on what to do next. When the user then tries to turn the lights back on, fireflies start emerging from the plants and filling the room (Image 9). We also tried to find a proper switch asset whose lever visibly flips when toggled, to satisfy the user's expectations (Image 1).

Initially we wanted the user to operate the switches with VR controllers; however, since we didn't have access to a VR headset, we had to use the keyboard. We chose to add a text box with instructions because it makes the user feel comfortable navigating and makes the next steps easier to understand.

Although some challenges arose throughout the project, such as not finding proper assets, lighting difficulties, and putting the furniture together, I am extremely pleased with the overall work we did in such a challenging situation. The end product came very close to what we first envisioned: it feels natural, but unusual and surprising at the same time. I think we managed to achieve the goal of an alternate world, because the results of the interactions are very different from our normal everyday life.

Project 2 Documentation | Boxing with Ares


Project Description

Boxing with Ares is an immersive experience in Unity that invites players into a dark and eerie world where they must navigate their internal conflict between peace and war, hope and sorrow. An inviting big red punching bag sits at the center of a gloomy, obscure, and desolate ground, actively contrasted by a sky filled with grids of smaller punching bags that seem to blend with bloody streaks of cloud: what could go wrong, and what other ominous things could happen here?

Unknown to the players, dozens of doves fly out of the punching bag whenever it is punched. That is simply not how punching a bag works in real life. The act of punching something is supposed to be violent: how can this make sense with such a symbol of peace, and how can two such antipodes coexist in the same world, let alone in the same interaction? Taken aback by the unexpected interaction, players then face the internal struggle of interpreting these encounters: whether to keep punching or to stop the violent act, whether to spread peace by setting the doves free or to let hope die out by chasing the doves away…

Process and Implementation

The very first step of brainstorming for this project was to come up with an everyday activity that we would modify in accordance with the alternate world. Someone yelled "let's do boxing" (I don't remember who), and the idea was so captivating that we went full force with it. The word "magic" somehow popped up in the conversation, and I said, "What if birds fly out of the punching bag, like when they magically fly out of a magician's hat?". Instantly, something clicked: we realized that if the birds were doves, which have long symbolized peace, they would unexpectedly counterbalance the suggestively violent act of boxing. They would open up so many questions revolving around peace, war, and the player's agency, their internal struggle between good and bad. There was also going to be a button that could change the color of the doves flying out (this, however, quickly proved to be a hasty idea that did not blend well with the rest of the experience).

Initially envisioned for the Vive system, the interaction was intended to be organic, analogous to punching a bag in real life: the player would pull the trigger while holding the controller tightly (resembling a clenched fist) and accelerate the controller/hand forward towards the punching bag.

The act of punching a bag. This is also the asset we found on Unity for the experience.

Also, we thought that we could envision a theater environment in which the player is given a platform to perform their internal struggle between peace and war.

This is our first sketch of the experience

However, after receiving feedback from Professor Sarah and our classmates, as well as the breaking news of the coronavirus, which had a big impact on how we could design the experience, we revised and narrowed down our initial idea. Specifically:

  • The interaction would only involve the action of punching the bag
  • The theater environment would be changed to a less context-based and more provocative space. We took inspiration from this scene from the Matrix, in which the environment gives no concrete clues as to where the player is situated: a place not defined by conventions.
  • We also took further inspiration from this set-up. We wanted some fog in the environment, as well as smaller punching bags randomly hung from the sky without any explanation of why they are there, opening up possibilities for self-interpretation and self-reflection on the player's part.

With the developed idea in mind, we started to work on the project. Neyva took charge of the environment, while Nhi and I worked together on the camera, the character, the interactables, and the interactions.

We decided to put the main camera on the character in such a way that the player can see their hands. As we could no longer use the Vive and its associated in-scene controllers, being able to look down and see one's hands provides a visual cue that interactions through the hands are possible. We limited the angle to which the player can rotate the camera downward, however, as we did not want the player to be able to look through the boxing man's body. Lastly, we made the camera and the boxing man children of the First Person Controller so that they move in tandem with the player's inputs. This is about as close as we could get to the experience we intended without the Vive.
We then added a script to detect the collision between the boxing man's hands and the bag. We could not rely on the default Collider alone, because we had to check that the collision was indeed caused by the punching action, not by accidental touches due to proximity to the bag. After detecting the collision, we added a script containing a class Bird to generate birds flying out of the bag. The birds are generated with random positions, random angles, random velocities (using a Rigidbody and the AddForce function), and a corresponding animation speed (the faster the velocity, the faster the wing-flapping animation).
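A rough sketch of this spawning logic is below; the tag, prefab field, and numeric ranges are assumptions rather than our exact code:

using UnityEngine;

// Hypothetical sketch: spawns doves with random positions, angles, and
// velocities when a glove collides with the bag; faster doves flap faster.
public class PunchingBag : MonoBehaviour
{
    public GameObject birdPrefab;   // dove prefab with a Rigidbody and Animator
    public int birdsPerPunch = 12;

    void OnCollisionEnter(Collision collision)
    {
        // The real script also verified that the punch action caused the hit,
        // to rule out accidental touches from standing too close.
        if (!collision.gameObject.CompareTag("Glove")) return;

        for (int i = 0; i < birdsPerPunch; i++)
        {
            Vector3 pos = transform.position + Random.insideUnitSphere * 0.5f;
            Quaternion rot = Quaternion.Euler(0f, Random.Range(0f, 360f), 0f);
            GameObject bird = Instantiate(birdPrefab, pos, rot);

            float speed = Random.Range(2f, 6f);
            bird.GetComponent<Rigidbody>().AddForce(bird.transform.forward * speed,
                                                    ForceMode.VelocityChange);
            bird.GetComponent<Animator>().speed = speed / 4f;  // faster flight, faster wings
        }
    }
}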
The bird prefab up close. We played around with its color and ended up choosing a white-grey-ish tone that suited the monotonous tone of the environment without overpowering the experience. The moving birds add a lot of contrast to a scene that is otherwise made up of stationary or slow-moving components.
The punching bags in the sky placed by Neyva.
The clouds in the sky placed by Neyva. Originally they were white; however, after toying with the skybox a bit, we set them to red, making the environment even more mysterious, hellish, dark, and cruel.
The ground fog effect (particle system) created by Neyva

Project Reflection

Overall, I am satisfied with what we were able to achieve in such a changing and challenging situation.

First of all, I can feel a sense of an alternate world in the experience. From the very prominent cue of a dark, ominous, sunless sky dotted with bloody streaks of cloud, to the less conspicuous desolate layout of the immediate environment with only a lone punching bag on the ground (or rather the lack of a definitive ground: only a featureless plane that extends and seamlessly melts into the horizon), to the omnipresent ambient sound with its unsettling tone: everything works together to transport the player into an alternate world that one might imagine but would be too scared to face oneself.

Moreover, being able to see one's hands (or rather, hands in a pair of red boxing gloves) from the very beginning of the experience immediately reminds the player of the possibility of using them for interactions. Apart from the smaller punching bags, which are placed far too high for any conceivable interaction, the one and only thing within the player's reach is the inviting big red punching bag a few feet away. It is obvious that something, expected or not, will happen upon interaction between the hands and the punching bag.


One small thing to note, however, was the lack of the volumetric spotlight that shines above the punching bag. While it was functional in the Unity project, once we exported an executable app the volumetric light was nowhere to be seen. Although it originally served to further emphasize, and act as an invitation to, the punching bag, its absence in the final app did not have a big impact on the experience as a whole: the aforementioned features are enough to act as affordances.

The end product came quite close to what we envisioned. In some ways it exceeds my expectations: it feels both more real and more alien than I could have imagined. While the immersiveness of the medium lends itself nicely to the experience, giving the player the freedom to explore and interact, it also presented us with the challenge of putting things where they should be. For example, we offset the punching bag quite a bit from the player's initial position so that the player can take in the environment as a whole before delving into the interaction. This was met with positive comments from our classmates, who said it gave them pause to think about their actions: whether or not to incite more violence by punching the bag in an already violent environment.

Agency Question

The very first thing that grants the player agency in this experience is the ability to see their hands right from the beginning. What's more, the hands are not bare: they are inside a pair of red boxing gloves, which imbue the player with an elevated kind of agency, the kind that comes with capabilities specific to boxing gloves. The sight of a matching red punching bag in the distance immediately afterwards inevitably invites the player to come closer and act on the thoughts triggered upon first seeing the gloves. The satisfying sensation comes from being able to punch the bag (either through a mouse click, as implemented here, or through the forward movement of a controller while clenching a fist, as initially imagined for the Vive) and to see the bag respond through its change of position and speed in space and time, as well as through auditory cues (the impact sound). Moreover, beyond the expected displacement of the punching bag, the player is surprised by doves flying out of the bag every single time it is punched. It is at this moment that the player realizes they can not only physically influence the world, but also extend their bestowed agency to the innocent doves somehow "trapped" in the bag: deciding either to set them free and spread hope outwards, or to keep them inside, holding on to the last bit of hope in this dark environment.

Documentation – Project 2: 3D Calculator

Project Description

Demo of final version

With the idea to make a “3D Calculator,” Ben, Keyin, Yeji and I worked together on this project. Its purpose is to imagine how the everyday activity of using a calculator could happen in a very different way with the help of VR.

Upon entering the scene, the user finds themselves in a bedroom (scene 1). In the center of the room are a bed and a desk; on the desk sits a calculator, lit by the lamp beside it. Once the user clicks on the calculator, they are taken into an alternative world (scene 2) where the calculation takes place.

Here, the user can form a formula by creating and dragging around cubes that represent operators and operands. The cubes that make up the formula remain red if not connected to the green equal sign cube; once connected, they turn blue and the result will pop out automatically.

It feels very alternate because instead of pushing buttons on a traditional calculator, the user places numbers and operators however they want on a 3D canvas that can hold infinitely long formulas. They can arrange the cubes in ways that embody the logic behind the calculation, which digits on a small screen cannot do. Moreover, VR allows us to turn an electronic device into an entire environment.

Process and Implementation

Before making up our minds on the 3D calculator, our team discussed a few ideas. From pet keeping and programming to barbecuing, we more or less traversed the spectrum of interaction: human-animal interaction, human-computer interaction, and human activities involving non-responsive objects.

Draft of ideas and storyboards

We finally settled on the 3D calculator, as all of us agreed that the idea of creating and snapping together virtual cubes sounded cool and futuristic. Personally, I was interested in how it could embody both everyday-ness and alternate-ness. The calculator is a common device that people are very familiar with, and using one comes up in many scenarios of daily life; but by reimagining the calculator in a VR context, we introduce a brand-new interface where users experience a different set of operations. These new operations might open the door to many new possibilities for doing everyday activities in easier ways. Specifically, with this project we want to demonstrate that activities requiring a strong sense of logic, such as mathematical calculation and even programming, could take on a more graphic, more intuitive form. Once users get familiar with this kind of operation, tasks might be done more efficiently.

We divided the work in developing this project into two parts: building the everyday environment, and building the core interaction with cubes. Keyin and I took on the former, so I will first elaborate on the design of the bedroom scene (scene 1) with respect to the following aspects, and then introduce briefly the calculator scene (scene 2) Ben and Yeji built.

  • Purpose
  • Identities
  • Design: furniture, room structure and lighting
  • Interactions

Purpose The question to answer before doing anything is: what is the purpose of scene 1? I expected our scene to act as a friendly starting point for the user, where they would instantly feel a sense of familiarity.

Identities This required us to create a homely and realistic atmosphere for the scene to be as immersive as possible. Therefore, we decided that building a bedroom would be an easy way to achieve the purpose, as a strong feel of warmth and coziness is often attached to bedrooms, and the identities above wouldn’t be too hard to realize with appropriate design.

Furniture We started with the bed at the center of the room, which has a very realistic texture that gives it a cozy feel. A nightstand was placed right next to the bed. Then we found a desk asset with incredible detail and put it in front of the bed as the platform where the calculator would sit. Alongside the calculator, we placed other stationery and a lamp on the desk in an untidy manner to reflect the casual bedroom identity; the chair serves a similar purpose. Other decorations include the curtains, a plant in the corner, and three pictures on the wall.

Room structure A significant feature of the room is the staircase on one side. The idea began with me suggesting that the room certainly needed a door, but Keyin added a staircase instead, which I later found to be the better idea. A door would most likely not have been functional; the staircase, despite not being functional (walkable) either, opens up the space and extends the user's perception of the house. Since there is a second floor, the user doesn't feel isolated in an enclosed space. The large French window, with its view of the sunset, serves that purpose as well.

Lighting We chose dusk as the backdrop because it brings out a serene feel and leaves plenty of room for us to create our own lighting. The bedroom contains two spot lights casting light from above, each with a gentle intensity and a warm yellowish hue. The light on the second floor is in fact brighter, so that the user may imagine a brighter space in other parts of the house even though they cannot go upstairs. And the desk lamp casts a spot light focused on the calculator, highlighting the most important object in the room and encouraging the user to interact with it.

Interactions There aren't many interactions in this scene; the only ones are clicking on the calculator and pushing the chair around. Later, after we combined our scene with Ben and Yeji's, Ben added the ability to pick things up and throw them around. We initially made the chair movable to enrich the gaming aspect of the project; after other objects became movable as well, I found the entire scene far more interactive, realistic, and engaging.

Demo of interactions in first version of scene 1

The calculator clicking interaction was made using a raycast, which basically means that as the user looks around, the center of their view acts as a cursor that can be used to click on things. When the cursor points at the calculator, the calculator turns blue, suggesting that it is clickable. Once it is clicked, the user is teleported to scene 2. From the desk lamp to the color change, we tried to make it as intuitive as possible for the user to navigate and access all the interactions.
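A sketch of such a gaze raycast is below; the tag, material swap, and scene name are assumptions rather than the project's exact code:

using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical sketch: a center-screen raycast tints the calculator blue
// on hover and loads the calculator scene when it is clicked.
public class GazeCursor : MonoBehaviour
{
    public Material highlightMaterial;   // blue "clickable" material
    private Material originalMaterial;
    private Renderer lastHit;

    void Update()
    {
        // Cast from the center of the view, where the cursor sits
        Ray ray = Camera.main.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));

        if (Physics.Raycast(ray, out RaycastHit hit, 3f) &&
            hit.collider.CompareTag("Calculator"))
        {
            Renderer r = hit.collider.GetComponent<Renderer>();
            if (lastHit != r)
            {
                ClearHighlight();
                lastHit = r;
                originalMaterial = r.material;
                r.material = highlightMaterial;             // turn blue on hover
            }
            if (Input.GetMouseButtonDown(0))
                SceneManager.LoadScene("CalculatorScene");  // assumed scene name
        }
        else
        {
            ClearHighlight();
        }
    }

    void ClearHighlight()
    {
        if (lastHit != null) { lastHit.material = originalMaterial; lastHit = null; }
    }
}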

Scene 2 When the user clicks on the calculator, everything in the room disappears except the calculator, which remains at the same place, while a sci-fi-ish scene appears and replaces the bedroom. The calculator apparently acts as a portal from the real world to the alternate reality, and the reason that it is still present in the latter is that by clicking on it, the user can return to the bedroom scene, which is suggested by its unchanged position.

Since we adapted the project for the PC platform instead of the Vive, the experience of this core interaction feels a lot different from how we intended it, but the scene itself very much lives up to my expectations. (Thanks to Ben and Yeji's hard work!) The glowy cubes work well with the blue/black grid floor and the darkened background, creating a futuristic feel that contrasts strongly with scene 1 and presents the alternate-ness as expected.

Demo of an early version of scene 2

The interaction is not as intuitive as it would be with a VR device. It involves several keys on the keyboard and all three mouse buttons, due to the number of operations needed in this scene: choosing an operand/operator; creating, moving, and deleting cubes; and walking around.

Reflection/Evaluation

One thing that's been bugging me is the degree of intuitiveness our project offers the user. In other words, is this way of doing calculation simpler than the regular way? How is it more appealing? The core problem is: how many old conventions are we taking into consideration, and what new conventions are we trying to build with such an interface? In this sense, I think this project strayed far from many conventions of using a calculator. Our interface does not include a digit keypad, which would be very familiar to most people. We don't even have all the digits and operators laid out for the user to choose from; they have to scroll the mouse wheel to find what they want. Moreover, placing a cube at a desired position within a 3D space is difficult through a 2D display, because the sense of depth is largely missing.

But in the meantime, I can't help but wonder what the project would be like in VR. Obviously, the user would have more dexterity in VR, with both hands available for more instinctive movements. It might well come in handy and prove faster than a traditional calculator once the user gets used to the operations. Like I said, the ultimate goal is to explore the possibility of 3D interfaces becoming a common operating language, serving many purposes beyond mathematical calculation.

Agency Question

Like Murray said, the pleasure of agency can come from double-clicking a file icon and seeing it open, or from obtaining the desired output from a piece of code. In our project, creating cubes to form an equation provides the same type of pleasure, equivalent to pushing buttons on a calculator and getting the correct result. I think the summit of that pleasure comes when the user places the equal-sign cube at the end of their formula. Whether the result is what they wanted or not, it is meaningful to see it come out and fulfill their expectation, because it is generated from something they built. The colors of the cubes (red when incorrect or not connected, blue otherwise) as well as the grid on the floor inspire the user to create things in their own way and anticipate the result.

Boxing with Ares: Project 2 Documentation

Boxing With Ares Final

Project Description

Group Members: Nhi, Vince, Neyva

“Boxing with Ares” gets its name from the Greek god of war Ares, who represents violence. Ares was chosen over the Roman god Mars, due to the fact that Mars represents valor in war instead of the violent aspect we wanted shown in our project.

Within our project, we really wanted to use the motion of "punching" as in boxing. While this was originally going to be done with VR controllers, which provide physical feedback, we had to scale it back to a mouse click within Unity. Something was lost in translation, but we were able to achieve the boxing animation we wanted.

The boxing takes place in a dark and bleak environment. Red clouds fill a black sky, and the character stands in a light fog. Small punching bags just beyond reach are spread throughout the sky, filling the space above the user's head. An ominous soundtrack (which can be found here: https://www.youtube.com/watch?v=Qm-El3qztgw) plays on loop in the background. The hellish landscape contains nothing the user can interact with except for the one large punching bag in the middle of an open area.

The most important part of the project is that once the user punches, white doves appear out of the bag. Since the project had to make an unexpected action occur from an everyday motion, we found doves emerging from a punching bag to be one of the least expected outcomes possible. The dove represents a fleeting beacon of hope within the dark and hostile environment, and that symbol of hope is juxtaposed with the violent action that causes the doves to spawn from the punching bag.

The thought was to create an alternate universe in which to situate this punching bag. We did not want to create a conventional area, such as a gym or forest in which this would take place, but created our own imaginary scene.

Process and Implementation

When we first began with the project, we thought to put the punching bag in a theater space.

Large punching bag situated on a theater stage, with an audience of chairs.

Our thought process was that any kind of prop placed on a theater stage looks natural, so we wanted to plop the large punching bag on a stage rather than use something like a gym environment. In a gym environment, the emergence of the doves would have seemed more nonsensical than not.

After talking to Sarah, we arrived at the idea of creating our own unique environment. One influence was the Matrix scene (shown below), in which there is nothing in the space to interact with besides the necessary objects.

Matrix scene after Neo first takes the red pill to see the truth

Building from this idea, we then began to look into imaginary dreamscapes that did not fit into a conventional kind of environment.

These were some of our earlier ideas, but finally we found one image which was a huge influence in our project.

We then took elements from this image, such as the lightbulbs (replaced in our version by the smaller punching bags) and the fog. We also followed its heavy black and gray palette in our landscape.

Luckily, we were able to find a free punching bag/boxing asset in the Unity store. It came with a large and a small punching bag, plus a character wearing boxing gloves: the perfect fit for our project.

Here you can see the 200 punching bags we placed in the sky in order to recreate the lightbulb feeling from the image above.

We then added multiple clouds that were dark red and gray in the sky.

Then, within the floor, we created two particle systems to produce the fog effect around the user. (We followed this tutorial: https://www.youtube.com/watch?v=DvKRGwCImJ4)

We used a sheet of textures in order to create the smooth transition within the fog, shown below.

The second particle system was a lot simpler, as it was just floating dust motes, done using the Unity Particle pack.
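For reference, a texture-sheet fog emitter could be configured from script roughly as below; the tile counts and values are guesses based on the tutorial, not our exact settings:

using UnityEngine;

// Hypothetical sketch: enables texture sheet animation so each fog particle
// cycles through the tiles of a smoke sheet, giving the smooth transition
// described above.
[RequireComponent(typeof(ParticleSystem))]
public class FogSheet : MonoBehaviour
{
    void Start()
    {
        var ps = GetComponent<ParticleSystem>();

        var sheet = ps.textureSheetAnimation;
        sheet.enabled = true;
        sheet.numTilesX = 8;   // assumed columns in the texture sheet
        sheet.numTilesY = 8;   // assumed rows in the texture sheet
        sheet.cycleCount = 1;  // play the sheet once per particle lifetime

        var main = ps.main;
        main.startLifetime = 10f;   // slow, drifting fog
        main.startSpeed = 0.2f;
        main.startColor = new Color(0.6f, 0.6f, 0.6f, 0.4f);  // translucent gray
    }
}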

Lastly, as part of the main interaction, we took the character from the boxing package and mounted the camera on him, letting the user experience the scene in first person. The boxing package came with the scripts and sound effects for punching, and from there we built a script to spawn the white doves whenever a collision was detected between the user's punch and the punching bag.

Reflection

I believe we were able to achieve an alternate reality, because the setting is very ominous and meant to feel like a hellish dream. That, coupled with the music playing on loop and the only interaction being to punch a punching bag, definitely places the user in a completely new environment. I believe it was a good choice to follow the Matrix-scene example, in which the only objects the user can interact with are the necessary items. The large punching bag, the only touchable object, standing all alone, is a very clear indicator for the user to come and punch it; the player's hands being boxing gloves is another clear affordance to go punch the bag and see what happens.

Overall, we were able to create the project we had originally envisioned, except for the controller-feedback aspect. We thought it would have been very fun to punch in VR, and the mouse click on a computer does not match the motion we originally wanted. BUT, other than that, we created the scene we had envisioned.

On Agency…

Originally, through the punching motion with the VIVE controllers, the user was meant to feel satisfaction at seeing the punching bag move and react to their touch, along with satisfaction at seeing birds spawn from their actions. This could have been improved in our project by having the mouse click be held to "charge a punch", with a corresponding effect when punching. I also spoke above of the clear affordances in our project, the punching bag being the only thing to interact with in the environment.

Project 2: Documentation

Project Description

The everyday activity we chose to reimagine is pressing a light switch, set in a homey environment consisting of a kitchen and a dining area. We limited the room size so that the project could be carried out with the Vive, but due to special circumstances we had to set it up as a regular app instead. We have envisioned what the project would be like in a VR setting, and I discuss the difference in the reflection section. To recreate the interactions triggered by pressing a light switch, we had to break the convention that a light switch only controls the lights. To realize the alternateness of this activity, we designed it to reflect the inner world of the audience, or more precisely of the character in our first-person story.

In our scene, the audience finds themselves in a room, and they can only explore the space within it. The main interactions are with the two switches on the wall. To make the transition to an alternate aspect of reality go smoothly, the first switch the audience sees actually does control the light, though some changes to the environment come along with it. The second switch can't be used to turn on the lights, and its special response is activated only when the lights in the room are off.


The demo below makes the story clearer. After entering the game scene, the audience finds themselves in the room. The door can't be opened if one tries; apparently the character is locked in for some reason (maybe self-quarantine). When the audience turns around and sees the window, they see fireworks outside. But when they turn off the room lights with the switch next to the window, they notice the fireworks are also gone. So here we are, alone in this dark room with only the cold moonlight.


However, when the audience turns around, they see another switch with a small glowing light. Turning this switch on, the audience finds themselves surrounded by fireflies that fly out from the plants. Even though we can't get out of the room, we can still have our own light within this tiny space.



Process and Implementation

We went through a long brainstorming process to reach the final idea presented in our project. From the beginning, our goal was to recreate an ordinary event in which the audience interacts with household objects, and this main idea remained unchanged to the end. Initially, our character was a sleepy person in the early morning: after a sip of coffee made with a coffee machine, the lights in the room would automatically turn on, meaning that the coffee "lights them up". However, due to the limitations of the available assets, we couldn't get a satisfying model for the coffee machine and had to give up this idea.

Later on, since we couldn't use the VR device and had to focus on one core interaction, we decided to choose the switch and implement some unexpected responses triggered by pressing it. In the end, we used two switches: one for controlling the lights, and another for making changes to the plants in the room.


  • Environment and Aesthetics

To begin with, Ganjina set up the room with furniture and decorations, and I helped arrange things in the room. In the initial environment, some space is left for the audience to walk around. One switch is on the wall next to the audience's starting position so that it is easily seen; the other is on the opposite wall, with plants nearby.

Initial environment setup

Afterwards, Luize updated the scene by choosing a suitable color tone for the walls and floor. She also added the ceiling with indoor lighting and more decorations in the dining area. The environment becomes warmer and cozier with the yellowish lighting.


Updated room scene with ceiling and indoor lighting
  • Scripts and Visual Effects

The scripts for this project are mainly about changing the states of the objects involved when a switch is pressed. The switch itself consists of two parts, one of which rotates when toggled. Chris implemented the interaction with the switch and the corresponding light changes.

I was mainly responsible for the scripts and visual effects triggered by the switches. Initially, we wanted to turn the plants from dead to alive to reflect how the character feels, but again the available plant assets weren't satisfying enough. So we changed our idea and instead built the story and fireflies mentioned in the project description. In fact, I was inspired by the extraordinary situation in which many people around the world have to lock themselves in their homes. I know how depressing and lonely it can be to spend days and nights in a limited space, not to mention the anxiety about the unprecedented situation. Yet I somehow have the feeling that eventually everything will come to an end; it won't even take a miracle, it is simply the fate of us all. So I use the image of fireflies as a metaphor for a subtle but certain faith. The fireworks outside enhance the contrast between before and after the light goes off: the fireworks only go off while the light is on, while the firefly effect can only be triggered when the light is off.


Fireflies with the room light off
Fireworks with the room light on

For the visual effects of the fireflies and fireworks, I basically played around with Unity's ParticleSystem. The fireflies have a relatively long lifetime so that they can fill the room before disappearing. To simulate the motion of fireflies, noise is added to the particle emission trail. As shown by the bounding boxes of the particles below, the size of the fireflies varies with their speed and lifetime, and their color (or glow) dims over their lifetime as well.
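Configured from script, those modules might look like the sketch below; the specific values are guesses, not our exact settings:

using UnityEngine;

// Hypothetical sketch: long lifetime, noise-driven wandering, and a glow
// that dims over each particle's lifetime, as described above.
[RequireComponent(typeof(ParticleSystem))]
public class Fireflies : MonoBehaviour
{
    void Start()
    {
        var ps = GetComponent<ParticleSystem>();

        var main = ps.main;
        main.startLifetime = 12f;   // long enough to fill the room
        main.startSpeed = 0.3f;

        var noise = ps.noise;       // wandering, firefly-like motion
        noise.enabled = true;
        noise.strength = 0.5f;
        noise.frequency = 0.3f;

        var col = ps.colorOverLifetime;  // glow dims as the particle ages
        col.enabled = true;
        var g = new Gradient();
        g.SetKeys(
            new[] { new GradientColorKey(new Color(1f, 0.95f, 0.5f), 0f),
                    new GradientColorKey(new Color(1f, 0.95f, 0.5f), 1f) },
            new[] { new GradientAlphaKey(1f, 0f), new GradientAlphaKey(0f, 1f) });
        col.color = new ParticleSystem.MinMaxGradient(g);
    }
}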

The fireworks each consist of an emission trail, small sparkles (as sub-emitters) along the trail, and a burst at the end of the trail. The trails and sparkles have gradient colors so that the progression from birth to death of each emission looks natural.


In addition, the audience can also open the fridge and the microwave; the mechanism is similar to flipping a switch, except that the doors rotate around an axis rather than around a point as the switch lever does.

Reflection

From my perspective, we realized the alternateness in our project. Conventionally, people don't expect a switch in their room to be able to start or stop fireworks outside. Furthermore, we don't usually see fireflies indoors, so that is also an unexpected event.

As for the medium, we initially envisioned the project with a VR setup. We think it would be quite different with hand gestures when pressing the switch or interacting with the microwave and fridge. I added colliders to the fireflies, thinking it would make for a better experience with VR devices when fireflies fly into the character (even better with a sound effect added).

Besides, the long brainstorming process of this project was especially inspiring to me. Before the actual implementation, we went through several rounds of idea sharing and feedback collecting with the whole class. At the very beginning, we didn't even have any alternate elements in our design, and the three interactions we had chosen were random and unrelated. We received the suggestion that, for the entire activity to make sense, we needed a story linking everything together. That's when I realized the meaning and importance of coherence, and I kept it in mind throughout the project.


Agency

There are two things we designed with agency in mind. First of all, in the first version the fireworks were far from the room, and one had to walk close to the window to see them; we felt they were not noticeable enough from the starting position compared with the first switch. So we adjusted the position of the fireworks so that the player sees them at first glance, and we moved the switch closer to the window so that the player can more easily work out what it does.

The second detail is the small glowing light on the second switch. It lights up only in the dark, so it catches the player's attention and invites them to press the switch when the light is off.


Fire Planet: Documentation

Project Description
Fire Planet is a small narrative experience that places the user in the position of a firefighter on an alternate world/planet engulfed by wildfire. In this world, civilization has been reduced to living under a protective dome, with water sprinkler units protecting its outside perimeter. Placed in the midst of a catastrophe – one of the sprinklers has malfunctioned and wildfire approaches the city – it is the user's role to use their powers (in the form of magical projectiles) to extinguish the flames and fix the sprinkler. This reflects an alternate-world activity: it is an everyday action for the firefighter/protector of this world, yet it takes place in a setting quite distinct from the reality we know.

Process and Implementation
Brainstorming
The brainstorming process for this project with Will and Steven was quite time-consuming yet fruitful. We started by pitching any action that occurred to us and that we thought would be interesting to explore in our VR game. After much deliberation, and after considering a lot of crazy options that in retrospect would have been too ambitious to complete, we realized that what we were missing was an experience that would fully capitalize on the advantages and affordances of the VR medium. This realization eventually led us to our chosen concept: a protector/firefighter who must protect their city from flames. The action the user would perform in this scenario was waving two large fans to extinguish the flames. We believed that fanning something in the air would be an interesting game mechanic, especially in VR. In terms of what would be everyday in this world, we decided that the action of extinguishing flames (which suggests that fire is dangerous in this world too) would work well to convey the user's objective. The alternate-ness would then emerge through the means of putting out the fire, along with the setting itself (a vast, desert-like planet with a dome-encased city in the distance). Deciding how to adapt this concept to a non-VR game was easy – instead of fanning the fire out, the user would now throw an orb at it. We made this choice because we realized it would be more intuitive for users to shoot something with the keyboard than to fan.

This storyboard was the result of our brainstorming session, showing the location of the user in between the wildfire and the city.

Implementation

The first step we made once we finalized this concept was to divide the different components we had to work on to carry the game out successfully. These were the components:

1. Environment

The environment was the most straightforward part, as we all had experience with it from Project #1. Steven was in charge of bringing it to life – adding the dome and the city, using a rugged landscape hinting at the alternate-ness of the environment, adding fog, and finally adding the sprinklers and the fire. Although this environment didn't require many components, their careful placement was crucial, as they were key in framing our narrative. For instance, having a wall of fire behind the water particles suggests that those flames are contained to their space and thus safe, while placing other fires in front of the user suggests that these are the ones that endanger the city and must be extinguished.

Rugged terrain surrounded the user
A distant city surrounded by a dome could be seen directly opposite from the fires
Opposite the city the user could see the fire wall, stopped by the sprinklers

2. Interaction 

The interaction was further divided into several parts: rendering the predicted projectile path for the magic sphere and shooting an object along that same trajectory, detecting collisions between the user's projectiles and the wildfire, and finally triggering the water sprinkler reboot when the user gets close to its vicinity. Out of all these, my main role was to render the predicted projectile path and enable shooting along that same path. To do this, I followed this tutorial, which demonstrated how to create a line rendering script as well as a spawning script that lets projectiles be shot along that same trajectory. The script was also flexible enough to fully customize the look, location, and angle of the line, as well as to change which object would be shot, which made these components easy to combine with Will and Steven's work.
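
The core idea behind such a script is to sample the ballistic arc p(t) = p0 + v0·t + ½·g·t² and feed the points to a LineRenderer. A self-contained sketch under assumed field names and values (not the tutorial's exact code):

    using UnityEngine;

    // Illustrative trajectory preview: sample the ballistic arc and feed
    // the points to a LineRenderer every frame.
    [RequireComponent(typeof(LineRenderer))]
    public class TrajectoryRenderer : MonoBehaviour
    {
        public Transform launchPoint;
        public float launchSpeed = 15f;
        public int pointCount = 30;
        public float timeStep = 0.1f;

        void Update()
        {
            var line = GetComponent<LineRenderer>();
            line.positionCount = pointCount;

            Vector3 start = launchPoint.position;
            Vector3 velocity = launchPoint.forward * launchSpeed;

            for (int i = 0; i < pointCount; i++)
            {
                // p(t) = p0 + v0*t + 0.5*g*t^2
                float t = i * timeStep;
                Vector3 p = start + velocity * t
                            + 0.5f * t * t * Physics.gravity;
                line.SetPosition(i, p);
            }
        }
    }

A projectile spawned as a Rigidbody with that same initial velocity then follows the rendered arc, which is what keeps the preview and the actual shot in sync.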

Line render showing the predicted projectile path hitting another object

3. Character 

Since we were using a first-person non-VR character, we also wanted to show our character's hands, along with a throwing motion hinting that the user themselves is generating these magic spheres. When we combined the project, we made sure to sync the activation of the hand animation with the shooting of the orbs.
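
A minimal sketch of that sync, assuming an Animator trigger named "Throw" (the trigger name and field names are illustrative):

    using UnityEngine;

    // Illustrative orb thrower: fires the hand animation and spawns the
    // projectile in the same frame so the two stay in sync.
    public class OrbThrower : MonoBehaviour
    {
        public Animator handAnimator;
        public Rigidbody orbPrefab;
        public Transform launchPoint;
        public float launchSpeed = 15f;

        void Update()
        {
            if (Input.GetKeyDown(KeyCode.Space))
            {
                handAnimator.SetTrigger("Throw");
                Rigidbody orb = Instantiate(orbPrefab, launchPoint.position,
                                            launchPoint.rotation);
                orb.velocity = launchPoint.forward * launchSpeed;
            }
        }
    }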

Throwing animation with orb

4. Combining all of these to create our story

After getting the foundational interactions done, a lot of time was also spent tweaking the experience and adding enough information/hints for users to understand the story they were placed in the midst of, while still being able to apply the game mechanics to fulfill their objective. Given the linearity of our experience, this was an aspect we really struggled with in the later stages of development. At first, we weren't sure if it was obvious to the user which fires they were meant to extinguish and which were actually contained by the sprinklers. At one point we even changed the story completely, removing the sprinklers entirely so that the only goal was to protect a tree that had caught fire. In the end, after much deliberation, we decided to stick to our original idea while making small yet effective design changes to make the story clearer and more intuitive. For this, we ended up swapping our sprinkler object for a flashier one (which even included an animation!) and added a large, bright cylinder surrounding it that always indicates to the user which direction to go.

A closer look at the water turret with the surrounding cylinder

To keep the components that were separate from the environment's objects consistent, we matched the look and color of the cylinder to those of the line rendering. We were also very careful with our choice of narration – we didn't want it to sound like instructions being read off a screen; we wanted the speaker to feel like another character in the story, thereby building out the universe they belong to. Through a carefully written script, we tried to give enough context about what was happening in the story while also tying in components that would otherwise have seemed a bit random and misplaced (such as the water fountains). We also edited the narration in Audacity to make the voice feel as if it were coming from a sort of communication device – an effect enhanced by the white noise and static we mixed in.

Reflection/Evaluation
Overall, I feel that we did achieve an alternate version of this activity, even if it was a very specific one: putting out fires by throwing a magical spell at them. As mentioned earlier, much of the latter part of development was spent ensuring that the experience offered enough affordances for players to carry out their mission. Obvious indicators, such as the turquoise cylinder and the color-matched projectile line rendering, were key in establishing a relationship between the short-term objective of putting out the fires by aiming at them and the long-term goal of reaching the broken water sprinkler unit. Placing the extinguishable flames in a loose line leading toward the broken sprinkler was also an effective choice that naturally led users to their end goal. Though the transition away from VR was hard at first, I'd say the medium ultimately didn't majorly affect the implementation of our idea: our story was there – we just had a slightly different way of telling it. Finally, I feel that the end result really reflects the mental image most (if not all) of our team members had of the experience. Initially, we each had our own mental image of how the experience would look and feel, but I'd say our game combined all of these conceptions really well, which I am very happy about.

Agency question: 

In Fire Planet, a "meaningful" action we designed for the user is the ability to throw magical orbs, particularly for the purpose of extinguishing fires. The action is triggered by aiming with the mouse and pressing the spacebar to shoot, and its design incorporates several elements beyond this pressing mechanic. The projectile path render facilitates aiming at different objects, since the position of the mouse on the screen does not necessarily reflect the raycast aim of the game. The positioning and throwing animation of the hands, triggered when the player shoots, is an additional element that situates the player more deeply in their character. We didn't want it to seem like magic orbs were just appearing out of nowhere, which is why this hand motion was crucial to placing users in the role of our firefighter. The meaningfulness of the throwing action comes from what it enables: reaching the broken sprinkler and fixing it. In a way, this action is a crucial plot device for our story. Additional feedback from throwing the orb, such as smoke emerging from the hit location and a sizzling sound when a fire is extinguished, is also meant to enhance the experience of carrying out this action.

Game demo:

Project 2 Documentation

3D Calculator

Documented by: Keyin

Group Member: Tiger, Ben, Yeji

Project Description

This project reimagines using a calculator in three-dimensional space. Instead of pressing buttons at fixed positions to get a result on a fixed screen, doing calculations in our project allows much more freedom, and the visualized 3D process breaks the usual mode of use while still offering a functional, novel calculator experience. We built the calculator world as an alternate world composed of blue grids and glowing cubes. The participant can move, turn around, and even float in the sky while manipulating the cubes to perform calculations. Besides the calculator world, we also made a realistic bedroom scene in which the alternate calculator world can be triggered through the normal calculator on the desk. The rules of the two worlds are quite different, and we used the realistic bedroom world to exaggerate the alternate aspects of the calculator world.

Process and Implementation

After our initial brainstorming, we had quite a few ideas worth extending, and we finally chose the one built around 3D cubes. The other ideas, for example an alternate backyard, could have been fun as well, but we preferred to play with the flexibility of simple 3D shapes in a limited space and make something creative yet simple and clear. We considered applications of the 3D cubes, including doing calculations and programming, because most of our group has a CS background and these really are everyday activities for us. For this project, we implemented the 3D calculator.

The scene starts in an ordinary bedroom, where the participant can move around at room scale. When the participant picks up the calculator on the desk, they are transported to an alternate world with only the calculator and glowing cubes in sight. Originally, in the VR environment, we considered making the cubes throwable, with the result dropping down from somewhere in the sky, which would have felt very immersive and alternate. But since the project was later redirected to PC, where we mostly interact through the mouse, keyboard, and screen, we simplified the effects, and dragging objects became our main mode of interaction in both scenes.

We first split into two pairs to build the two scenes separately, then combined our work into one complete project. While working on our own scenes, we also thought about consistency, such as how to change scenes and how the interaction should work. Tiger and I worked on the first scene and initially used SceneManager to shift between two separate scenes, while Ben and Yeji toggled visibility within a single scene instead. Both approaches can work, but we went with the latter, because it makes matching the camera settings across the two worlds much more convenient. We ran into some problems when combining our work, mostly missing materials and textures, but we fixed them one by one and also improved the effect on the calculator during the transition.
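
The two approaches can be sketched side by side (object and scene names here are placeholders, not our actual ones):

    using UnityEngine;
    using UnityEngine.SceneManagement;

    // Illustrative comparison of the two scene-switching approaches.
    public class WorldSwitcher : MonoBehaviour
    {
        public GameObject bedroomRoot;     // root of all bedroom objects
        public GameObject calculatorRoot;  // root of calculator-world objects

        // Option A: separate scenes via SceneManager (the first attempt).
        public void LoadCalculatorScene()
        {
            SceneManager.LoadScene("CalculatorWorld");
        }

        // Option B: toggle visibility within one scene (the one kept);
        // a single persistent camera makes matching its settings trivial.
        public void EnterCalculatorWorld()
        {
            bedroomRoot.SetActive(false);
            calculatorRoot.SetActive(true);
        }
    }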

The calculator scene uses snapping to connect cubes and perform calculations. By applying different movement speeds and rules, the participant gains more freedom in this scene. Cubes involved in the calculation turn blue while the others remain red, and the result pops up on top of the "=" cube. If anything is wrong with the arrangement of the cubes, the result shows "Error". The mouse interactions include left clicking, right clicking, middle clicking, and scrolling. It may take some time to get used to this 3D calculator, but it genuinely provides a totally different experience of this everyday activity.
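
Conceptually, the evaluation can be thought of as reading the snapped chain of cubes left to right; below is a deliberately simplified sketch (only + and -, no operator precedence), not our actual script:

    using System.Collections.Generic;

    // Simplified left-to-right evaluator for a snapped chain of cubes,
    // e.g. ["3", "+", "4", "-", "2"]. Purely illustrative.
    public static class CubeChainEvaluator
    {
        public static string Evaluate(List<string> tokens)
        {
            // A valid chain alternates operand/operator, so its length is odd.
            if (tokens.Count == 0 || tokens.Count % 2 == 0) return "Error";
            if (!double.TryParse(tokens[0], out double result)) return "Error";

            for (int i = 1; i < tokens.Count; i += 2)
            {
                if (!double.TryParse(tokens[i + 1], out double operand))
                    return "Error";
                switch (tokens[i])
                {
                    case "+": result += operand; break;
                    case "-": result -= operand; break;
                    default:  return "Error"; // malformed arrangement
                }
            }
            return result.ToString();
        }
    }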

More detailed process can be found in Development Journal. And here are demos of our two scenes.

First Realistic Scene
Second Alternate Scene

Evaluation

The alternate part of our project lies in the calculator world, which is immediately perceptible thanks to an environment setting very different from the real world and the unrestricted movement directions available to the participant. To guide the participant toward interacting with the calculator, we made the calculator glow whenever the center of the screen points at it. Correspondingly, in the alternate world there is a big glowing cube functioning as a desk, with the calculator placed at its center and given the same glow whenever it is looked at. This signals that the calculator is a switch for shifting between the two scenes. However, compared to the natural interaction of throwing and moving objects in the first, realistic scene, the interaction in the second, alternate scene may not be as intuitive because it relies on different mouse buttons; we think adding some instructions would help. In general, our project has successfully achieved what we wanted to create. Even though we deviated from the original concept, which was set in a VR world, we preserved the core interaction of our project and made the experience as rich as possible on PC. The different choices required by different environment settings were also part of our learning experience.
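
The glow-on-gaze mechanic boils down to a raycast from the center of the viewport; a sketch under assumed names (the "Calculator" tag and emission-keyword glow are illustrative assumptions about the setup):

    using UnityEngine;

    // Illustrative screen-center highlight: raycast from the middle of the
    // viewport and toggle an emission glow when the calculator is hit.
    public class GazeHighlighter : MonoBehaviour
    {
        public Camera cam;
        public Renderer calculatorRenderer;

        void Update()
        {
            Ray ray = cam.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
            bool looking = Physics.Raycast(ray, out RaycastHit hit, 10f)
                           && hit.collider.CompareTag("Calculator");

            if (looking)
                calculatorRenderer.material.EnableKeyword("_EMISSION");
            else
                calculatorRenderer.material.DisableKeyword("_EMISSION");
        }
    }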

Agency

The main interaction of our project is, no doubt, playing with the 3D calculator, where the participant can click, drag, and move with the keyboard. The action is meaningful because it ultimately shows the result that anyone using a calculator expects. As I mentioned in the evaluation, we could make the interaction more obvious by adding some instructions. But from another angle, the participant has a chance to explore the alternate world through their own understanding by figuring out how to make it work, considering that WASD keys and mouse interaction are very intuitive on PC. With a very simple setup at the beginning, where there is only a cube with "=" and a cube functioning as a controller at hand, it should be easy to find out how to create new cubes and connect them. Besides, we also designed the action of shifting between the two worlds. We fixed the position of the calculator and added the highlight effect when it is hovered in order to indicate that it is the medium connecting the two scenes. It also makes sense in that the participant will want to see the working space of a 3D calculator when choosing to click the glowing calculator on the desk. Moreover, we picked a starting point right in front of the staircase in the first scene because we wanted to encourage the participant to move around and explore. That's how we thought about agency along the way.

Project 2 Documentation | Fire Planet

Description

The user stands on a barren planet, fires burning all around, with mechanical sprinklers keeping most of the flames at bay. Behind the user towers a city in the distance, surrounded by a dome. A voiceover calls upon the user to use their powers to fight the fires and repair a sprinkler that has broken. Using their abilities as the designated protector, the user can fire projectiles from their hands to extinguish the fires and clear a path to the sprinkler. When the user reaches it, the sprinkler reactivates and the voiceover congratulates them. In short, the user fights fires on a different planet.

Process

We started by thinking about interesting, observable, everyday actions. Building on our ideas from the in-class activity and considering the controllers of the HTC Vive, we landed on the action of waving a fan. Simultaneously and iteratively, we considered the situations in which waving a fan does occur, and could occur, inside an alternate reality. This led us to think about firefighting with a fan. With this action and character context, we built a narrative of a civilization under constant threat from the fires of the planet it lives on.

We imagined what the life of a firefighter in such a civilization would be like. A civilization under constant threat of fires would establish more permanent and secure defenses against the ever-present threat of fire. Hence, the city of the civilization is covered by a dome, and the dome is surrounded by large mechanical sprinklers. The role of a firefighter as a first-responder shifts slightly in this reality, then, as they are tasked with repairing and maintaining the defenses rather than fighting the fires themselves. This translated into the user repairing a broken sprinkler, using a fan to fight the fires which had encroached inside the boundary in the meantime.

storyboard

Due to the change from a VR system to a computer system, we had to reconsider the fan interaction. While an HTC Vive controller mimics the handle of a fan and can be waved independently of the user turning their head, waving a mouse around in first person would force the user's head and aim to move erratically, or require an unconventional restriction of movement. Because of these affordance problems, we fell back on an earlier idea: throwing water balloons. The FPS-reminiscent setup connects more clearly to what users already expect from a mouse-and-keyboard game. Following in-class feedback, we abstracted the water balloons into spells.

With the world, character, and interaction worked out, we moved to divide up the work and start building. Steven worked on the scene design and hand animation. Mari worked on the projectiles. I worked on the particle system interaction between the extinguishing projectile and the fires.

To create the effect of an endless fire, the fires had to burn continuously and reignite after being extinguished. The reignition had to be delayed so that the user could clear a path. After exploring the asset store for different fire systems, I started with the Fire Propagation System. I followed its instructions, learned exactly how all of its parts worked, and realized it would not work for us: the fire system was too realistic. Fires in this system burn based on available fuel and various material and climate properties. Once the ignition system reaches a material's calculated "HP", a fire starts and then burns until the material's fuel value is exhausted. Unfortunately, we did not want a fire that would always die.

I read through the sections of the Unity reference on particle systems and collision, then found tutorials on scripting delayed actions and on disabling and re-enabling game objects. With this research, I was able to write two scripts. One detects collisions between the extinguishing particle system and the fire particle system and disables the fire system upon collision. The other detects when fires have been disabled and re-enables each particle system after a given delay. With this much simpler method, the affordances of the extinguishing spells and the response of the environment are much clearer.
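
A condensed sketch of how those two scripts could look, merged into a single component on each fire (field names and the delay value are illustrative, not the project's exact code):

    using System.Collections;
    using UnityEngine;

    // Illustrative extinguish-and-reignite behavior for one fire.
    public class ExtinguishableFire : MonoBehaviour
    {
        public ParticleSystem fire;      // this fire's particle system
        public float reigniteDelay = 10f;

        // Called by Unity when the extinguishing particle system (with its
        // collision module and "Send Collision Messages" enabled) hits this
        // object's collider.
        void OnParticleCollision(GameObject other)
        {
            if (fire.isPlaying)
            {
                fire.Stop(true, ParticleSystemStopBehavior.StopEmittingAndClear);
                StartCoroutine(Reignite());
            }
        }

        // Re-enable the fire after the delay so a path can still be cleared.
        IEnumerator Reignite()
        {
            yield return new WaitForSeconds(reigniteDelay);
            fire.Play();
        }
    }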

The scripts were combined with Mari and Steven’s work. As final touches, a voiceover was added to clarify the narrative and contextualize the user’s role in the experience, and a cylindrical indicator was also used to guide the user to the disabled sprinkler.

Scripts

Script Demo Images

Reflection

In the end, this experience successfully establishes an everyday activity in an alternate world. It approaches the challenge by taking a specific role, the firefighter, and restructuring that role in an alternate reality, the fire planet. The voiceover, hand animation, particle path, and fire system all clearly demonstrate or reinforce the different affordances of the experience. While the world and the character matched what we envisioned early on, the interaction itself lacks much of the seamlessness afforded by a VR system and its controllers. And while the materiality and contextualization of the interaction are unique, we did not get to explore the potential of a new medium the way creating a fanning interaction in VR would have allowed.

Agency

The main action designed for the user in Fire Planet is extinguishing fires. Its meaningful quality relies upon the instinctive perception of uncontained fire as a threat, the intuitive perception of a city as something to be defended, and the provided (via voiceover) impetus of the user’s role as a protector. The planet’s features and orientation of the user help establish these perceptions. The uncontrolled fires contrast against the fires controlled by the functioning sprinklers. The user starts close to and facing the uncontrolled fires, with the city behind them. Placing them between danger and the endangered, and having them face the danger rather than face away from it, suggests the role of a user as a guardian about to defend rather than a victim about to flee. The voiceover then makes all of this explicit through narrative, establishing a motive and a specific task, repairing the sprinkler, that will allow them to fulfill their role completely. All of these elements combine to drive the user to extinguish the fires.