Documentation | Final Project

PROJECT DESCRIPTION

  • Team members: Nhi, Luize, and Tiger
  • Theme: Apocalypse.
  • Space & Storyness: Our project combines two experiences: our daily life in a small apartment and the underwater world. This alternate reality was created in the hope of connecting our everyday actions to their negative impact on the ocean, emphasizing the interrelation between human daily activity and nature. Starting from an ordinary setting every user is familiar with, the user’s interactions with food objects, such as plastic bottles and bags of chips, represent our excessive daily plastic use. We consciously or unconsciously rely on plastic because its durability, strength, and low cost make it the material of choice for so many products. The message of ocean pollution is conveyed in a 20-second video that the user watches before diving into the underwater scene. Without any special equipment, the user enters the underwater world and takes in its beauty before that world is completely destroyed. By letting the plastic objects from our daily life contaminate the underwater world the user explores, we hope to emphasize the negative impact our everyday activities can have on the ocean, and on nature in general.
  • Storyboard for the project: Our initial storyboard consisted of 9 scenes, with the first scene describing our daily life and the next 8 describing the underwater experience across 3 centuries. However, we reduced the number of scenes to the two main scenes described above.
  • Playtest video:
An alternate realities experience designed for Google Cardboard, exploring the interrelation of two worlds – our daily life and the underwater world.

PROCESS / IMPLEMENTATION

Project idea development: Our final choice of theme diverged from our initial idea for the project. At first, we decided to combine two themes, namely “interpretation of a city” and “escape room”, as the descriptions of the different invisible cities and our social distancing situation amid the Covid-19 pandemic served as inspiration for project ideas.

However, since our team put its main focus on creating a meaningful message for the users and a sense of storyness in our project, after lengthy discussion we arrived at a final theme, “apocalypse”, different from what we started out with. Specifically, we hope to show users how our actions in daily life, whether conscious or unconscious, significantly contribute to ocean pollution.

The role of the user in the experience is that of a character with impact. In the first scene, users can interact with the food objects and the TV, and switch scenes using the coral. In the second scene, although mostly observing the underwater world, users leave a trail of plastic wherever they go, and can switch back to the previous world through the coral at the end of their exploration path.

The final story of the project: The first main interaction represents our unconscious daily consumption of plastic: the user handles food objects such as plastic bottles and bags of chips. We designed the first scene with various hints towards the underwater scene that follows, using corals of different types and colors. After grabbing and releasing the food objects, the user can also interact with the TV, which plays a 20-second video showing the current alarming state of ocean pollution. After the video, the coral on the shelf next to the TV lights up, inviting the user to interact with it, which switches to the next scene. The underwater space starts out as a beautiful and lively scene. The user can walk around, but a designated path is indicated by two lines of plants. The scene changes gradually: the environment becomes darker and the fish disappear. Every time the user walks around, they leave a trail of plastic behind them, consisting of the food objects from the previous scene.

Scene 1:

  • Plastic containers around the room. The user can interact with them (grab/release)
  • Television with a video about ocean plastic pollution
  • After watching, the user can teleport to the next scene

Scene 2:

  • The user enters a pristine underwater environment – beautiful, vibrant, and lively at first, encouraging the user to explore more
  • The user follows a plant path
  • After a while, a trail of plastic starts appearing behind them and around the ocean, the fish disappear, and the lighting darkens.


Implementation of the project: Our project consists of two main tasks: designing the two main scenes and building the interactions. I worked mainly on designing the first scene and writing scripts for the interactions in both scenes.

  • After the first playtest session in our class, we realized that our initial design for the first scene did not reflect a strong connection with the second, underwater world. The playtesters were confused when exploring the scene and did not expect that our project had a second scene. Hence, we decided to redesign the first scene, adding corals as the main decorations for the room.

For building the interactions in the project, I wrote several scripts and partially used the GoogleVR package for event triggers (a sketch of the movement logic follows the list below):

  • PlayerWalk: this script is attached to the player object in the first scene. Since the only input available on Google Cardboard is the button click, the user moves by clicking the button, at a speed of 1 (the speed field is public, so it can be changed). The user cannot walk while holding a food object.
  • PlayerWalkUnderwater: this script is attached to the player object in the second scene. Here we keep the y-direction constant and adjust the camera’s height to create the feeling that the user is swimming underwater rather than walking on the seabed.
  • Food interaction: this script is attached to the player object and the food objects, enabling the user to grab/release an object whenever the reticle points at it.
  • Food collision: this script is attached to the food objects to detect collisions with other objects in the room. However, while the user holds a food object, its Rigidbody is kinematic, so collisions are ignored. The fix is Edit -> Project Settings -> Physics -> Contact Pairs Mode set to Enable Kinematic Static Pairs. This ensures the collision is still detected while the object is held, so the object is released whenever it collides with another game object in the room. This is still not perfect: when we playtested the script, objects sometimes still passed through other objects.
  • CoralChangeScene: this script is attached to the coral that serves as the switching point between the two scenes. Its event trigger is only enabled after the user watches the ocean-pollution video.
Coral lights up, inviting the user for interaction (switching scene)
  • RayCasting: this script is attached to the player object. When hit.distance from the user to an object in the room is smaller than 1.2f, the user cannot move forward, which prevents walking through walls and other objects.
  • TriggerTV: this script is attached to the player object, with an event trigger added to the TV object. It is also what enables the scene-switching coral only after the user finishes watching the video.
The 20-second video about ocean pollution
  • Cursorhide: this script is added to hide the cursor, leaving only the reticle pointer in the scene.
  • For the underwater scripts, including ChangeLightIntensity, DeletePlastic, MovingFish, MovingShark, and SpawnPlastics, I tuned the frameCount values to adjust the speed and amount of plastic generated, and the timing of the fish disappearing and the light darkening.
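As an illustration of how PlayerWalk and RayCasting fit together, here is a minimal sketch of the click-to-move idea described above. The exact field names (e.g. isHoldingObject) are assumptions for the sketch, not our exact code:

```csharp
using UnityEngine;

// Minimal sketch of the PlayerWalk idea: the single Cardboard button is
// also used for grabbing, so walking is disabled while an object is held.
public class PlayerWalk : MonoBehaviour
{
    public float speed = 1f;         // public so it can be tuned in the Inspector
    public bool isHoldingObject;     // assumed flag, set by the grab/release script

    void Update()
    {
        // A mouse click in the editor corresponds to the button on Cardboard.
        if (Input.GetMouseButton(0) && !isHoldingObject && !ObstacleAhead())
        {
            // Move along the camera's gaze direction, keeping y constant
            // (in the underwater scene the height is fixed to fake swimming).
            Vector3 forward = Camera.main.transform.forward;
            forward.y = 0f;
            transform.position += forward.normalized * speed * Time.deltaTime;
        }
    }

    // The RayCasting idea: block movement when something is closer than 1.2f.
    bool ObstacleAhead()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, Camera.main.transform.forward, out hit))
        {
            return hit.distance < 1.2f;
        }
        return false;
    }
}
```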

We also discussed whether we should change anything in the first room when the user goes back. After the discussion, we all agreed to keep it the same. Although it is easy to realize how much plastic a single person can generate, it is extremely difficult to significantly reduce plastic use in daily life because of its convenience and ubiquity in products. This represents an infinite loop that can only be broken by a change in awareness and behavior.

EVALUATION / REFLECTION

The project was an amazing team-collaboration experience. As we worked through each scene, we realized there were a few problems we had underestimated when we first started. For example, the scope of Google Cardboard limited our choice of interactions: since the single button click is used for both walking and grabbing/releasing objects, we had to reconsider when walking should be enabled for the user. Another problem we encountered during development was collision detection and interaction with the food objects. With great support from Professor Sarah Krom, we were able to fix these issues in the end.

In addition, reducing from 9 scenes to only 2 turned out to be the right choice. Despite the time constraints, this simplification still conveys the message of our project.

Although we did not have time to add interactions with the fish and other plants in the underwater world, witnessing the plastic overwhelmingly occupying the ocean at the end was quite powerful for us when we playtested our project.

Development Journal | Final Project

PROJECT MEMBERS: Luize, Nhi, and Tiger

PROJECT THEME: Apocalypse/Escape Room

IDEA DEVELOPMENT:

In our first meeting (via Zoom), we decided on a few elements that we wanted to convey in our project before brainstorming: 4 final-project theme ideas (Apocalypse, Escape room, Wonderland, An interpretation of a city from Invisible Cities), a fictional space, interactions, events, and a sense of storyness.

In the beginning, we thought of recreating 3 cities from Invisible Cities: FEDORA, OLIVIA, and ESMERALDA. The theme would be escape room & Invisible Cities interpretation, linked to the current situation in which everyone is trapped in their own space, trying to escape that state of mind and connecting with other people through the Internet – a way of escaping the reality we are living in right now. Each city has its own unique inhabitants; for example, OLIVIA has skeletons, since it reflects industrialization and the repetitiveness of the work people do every day.


However, since the main focus of our project is the sense of storyness, we found that our approach of recreating the invisible cities did not reflect what we wanted. We then brainstormed a different idea within the escape-room theme. The context: the protagonist (the user) is a prisoner who wakes up with amnesia and finds themselves in a small, dark cell. A giant clock on the wall shows red digits counting down from one hour. This, hopefully, triggers anxiety and makes the user look for tools and attempt a jailbreak. When the user successfully finds the door to escape, different scenes await them (representations of a person's past, present, and future). In the final scene, the user can find the door that brings them back to reality. The overall message we want to present through this idea is that every moment in life is precious.
This was a better idea than the first, but we encountered one problem: since each user has their own experiences, there is no generic way to lay out scenes that evoke feelings/emotions for every user to reflect on. Therefore, we decided to revise the idea into a more neutral setting: the undersea environment.


Final idea:

  • Beginning scene: neutral, white background with a television (off) -> the user must interact with (turn on/off) the TV to enter the undersea alternate-reality world.
  • The same idea as before but with different scenes and a different message: travel through time in the ocean to see how the environment changes in each time period.
  • THEME: Apocalypse
  • The user is under the sea; a line (road) indicates where the user should move (sunken ship (1920), submarine (2020)), where they find a button to enter the same scene 100 years later.
  • The scenes (3 scenes): in each time period, a clock/sign somewhere explicitly indicates the time (the year).
  • 1st scene (past, 1920): no plastic
  • 2nd scene (present, 2020): a lot of plastic but still can be saved
  • 3rd scene (future, 2120): a lot of plastic and no animals -> can’t be saved anymore. In this 3rd scene, the user needs to dig into the plastic in order to find the button and travel back to the current time (reality).
  • The user goes back to the present (the beginning scene) and takes action to save the environment.
  • Message: save the world before it’s too late.


Some clarifications/class feedback/adjustments:

1. The meaning of the TV in the first and last scene: The user needs to interact with the TV in order to move to the undersea scene. What we had in mind was showing different scenes of the ocean before the user actually experiences it, similar to how most people only know the undersea world through a screen rather than through direct experience.

2. Reduce the number of scenes to 4-5: Though that is a lot of scenes, it basically comes down to 3 ideas: first, the TV scene, which is extremely simple; second, the outside undersea scene (appearing three times with different levels of destruction); and third, the inside scene (sunken ship and submarine). In short, we lay out only 3 main scenes and then replace a few things in each scene to demonstrate what we want.

3. The use of the button (click): instead of a literal button that triggers changes, something more subtle that blends in with the story – the nautilus.

UPDATE April 17, 2020

Tiger and I finished the list of needed assets and created the first scene in our project. In this scene, we added a screen to the TV, which will be used to display the video later.

First room the user enters

UPDATE April 20, 2020

After the lecture and class discussion on Procedural Authorship, our team felt that in our project the players would be in the role of a “ghost without impact”, since they would only observe what happens over 300 years without having any real impact on the environment. Hence, we decided to create some interactions between the user and the environment and limit our project to only 2 main scenes:

  • The first scene: the user enters an apartment (which is also their house in the game) where they see snacks, water bottles, cups, and cans on the table and on the floor. They can interact with the objects by grabbing and releasing them, and they can move by clicking the mouse (the button in Google Cardboard). The key moment of the scene is when they turn on the TV and watch a video/teaser of the experience that comes next.
  • The second scene: the user enters the underwater scene of 100 years ago. As they explore and interact with the undersea animals, they leave a trail of plastic behind them (the cans, water bottles, and cups they saw in the first scene). We also hope to make the scene gradually polluted (sea animals/plants gradually dying), which also represents the 3 scenes we initially had in mind (1920, 2020, 2120).

UPDATE April 26, 2020

We finished laying out two basic scenes.

In the first scene, I added objects for user interaction, such as cans, chips, water bottles, and coffee cups, and wrote the PlayerGrab and PlayerWalk scripts. I also wrote SceneCtrl for switching scenes later, and added event triggers to the objects so that when the user gazes at an object, they can click the mouse (the button in Google Cardboard) to grab or release it.
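A minimal sketch of the SceneCtrl idea, assuming the scenes are named “Apartment” and “Underwater” (placeholder names) and registered in Build Settings:

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch of the SceneCtrl idea: one component that the corals' event
// triggers can call to move between the two scenes.
public class SceneCtrl : MonoBehaviour
{
    public void GoUnderwater()
    {
        SceneManager.LoadScene("Underwater");   // assumed scene name
    }

    public void GoBackToApartment()
    {
        SceneManager.LoadScene("Apartment");    // assumed scene name
    }
}
```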

UPDATE April 29, 2020

After the team check-in, we all agreed on the current design of the environments (the room and the underwater scene) and finalized the interactions we are going to add in the underwater scene. Currently, the user can look around using ctrl + moving the mouse in the direction they want to see, and can walk by clicking the mouse (the button click in Google Cardboard) in both scenes.

  • The final interaction we are going to add in the white room is the user’s interaction with the TV. When the user looks at the TV, it is expected to change color from black to white. When the user clicks on the TV screen, it shows the video below, which was designed by our team member Luize.
  • In the underwater scene, every time the user walks around, they leave a trail of plastic behind them. They can also interact with the sea creatures and animals, with no immediate effects. However, the scene changes gradually: the environment becomes darker; the fish disappear. The user might not notice this at first, but over time the change becomes significant enough for them to realize their negative impact on the ocean.

UPDATE May 05, 2020

After the first playtest, we realized that the first scene was not well designed and thus kept the user from interacting with the objects in the room. Since we wanted to create a setting that truly reflects daily life in an apartment, we decided to recreate the scene. I was in charge of redesigning this scene and adding its interactions, while Tiger and Luize focused on redesigning the second scene.

In this first scene, I added corals and sharks to hint the user towards the underwater scene. When the user interacts with the objects (chips, coffee cup, milk bottle), the objects are constrained to a vertical line, only moving up and down. I limited the movement because I could not figure out how to make it look natural when the user drops an object. The user can also click on the TV screen to play the video. After the video finishes, the coral on the TV shelf lights up, inviting the user to interact with it; clicking the coral leads to the second scene.
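A sketch of this unlock flow, using Unity's VideoPlayer; the specific references (coralSwitch, coralLight) are assumed names for illustration:

```csharp
using UnityEngine;
using UnityEngine.Video;

// Sketch of the TriggerTV idea: play the video on click and only enable
// the scene-switching coral once the video has finished.
public class TriggerTV : MonoBehaviour
{
    public VideoPlayer videoPlayer;    // the TV screen's VideoPlayer
    public GameObject coralSwitch;     // coral that switches scenes (assumed reference)
    public Light coralLight;           // light that "invites" interaction (assumed)

    void Start()
    {
        coralSwitch.GetComponent<Collider>().enabled = false; // locked until video ends
        coralLight.enabled = false;
        videoPlayer.loopPointReached += OnVideoFinished;      // fires when playback ends
    }

    // Hooked up to the TV's event trigger (gaze + click).
    public void PlayVideo()
    {
        videoPlayer.Play();
    }

    void OnVideoFinished(VideoPlayer vp)
    {
        coralLight.enabled = true;                            // coral lights up
        coralSwitch.GetComponent<Collider>().enabled = true;  // now clickable
    }
}
```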

In the underwater scene, after Tiger and Luize finished the design, I added the player walk movement to make it consistent with the movement in the previous scene.

UPDATE May 08, 2020

After the second playtest, we realized that the constraint on the objects' movement made the interaction meaningless. Professor Sarah Krom has been really supportive and helped us out with this problem (by adding a Rigidbody to the food objects so we can take advantage of physics when dropping them). I am currently finishing the final touches on the interactions with these objects. I also added the script that hides the cursor whenever the user enters the scene.

UPDATE May 11, 2020

The scripts for the food objects worked perfectly thanks to the help of Professor Sarah Krom. The user can grab the food objects and drop them anywhere they want. However, I encountered one problem while working on this part: while an object is grabbed, its Rigidbody is kinematic, so collisions are ignored. The fix is Edit -> Project Settings -> Physics -> Contact Pairs Mode set to Enable Kinematic Static Pairs. This ensures the collision is still detected while the object is held, so the object is released whenever it collides with another game object in the room.
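A minimal sketch of the resulting collision handling on the food object, assuming an isGrabbed flag set by the grab/release script:

```csharp
using UnityEngine;

// Sketch of the food-collision fix: while held, the object's Rigidbody is
// kinematic, so collisions are only reported because Contact Pairs Mode is
// set to "Enable Kinematic Static Pairs" (Edit -> Project Settings -> Physics).
public class FoodCollision : MonoBehaviour
{
    public bool isGrabbed;   // assumed flag, toggled by the grab/release script
    Rigidbody rb;

    void Start()
    {
        rb = GetComponent<Rigidbody>();
    }

    void OnCollisionEnter(Collision collision)
    {
        // If the held object bumps into furniture or a wall, drop it so it
        // cannot pass through the geometry.
        if (isGrabbed)
        {
            isGrabbed = false;
            rb.isKinematic = false;      // hand control back to physics
            transform.SetParent(null);   // detach from the player/camera
        }
    }
}
```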


UPDATE May 12, 2020

While Tiger and I worked on the final touches for the project, mostly in the second scene, Luize prepared the presentation. I replaced the FPS controller with a player object that can only move by clicking the mouse. Since movement underwater differs from movement on the ground, we decided to keep the user's movement near the seabed, as if they were swimming along the path.

We also adjusted the frameCount in the scripts to change the speed and number of plastics, the change of light, and the disappearance of the fish in the ocean. We also adjusted the scene-switching script to let the user go back to the previous room, their daily life.
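A sketch of the frameCount approach we used for the gradual changes, shown here for the light; the interval and step values are placeholders of the kind we tuned during playtesting:

```csharp
using UnityEngine;

// Sketch of the frameCount idea behind ChangeLightIntensity: tie gradual
// changes to Time.frameCount so tuning one number changes the pace.
public class ChangeLightIntensity : MonoBehaviour
{
    public Light sceneLight;          // the underwater directional light
    public int frameInterval = 300;   // assumed value, tuned during playtests
    public float dimStep = 0.05f;     // how much darker each step gets

    void Update()
    {
        // Every frameInterval frames, darken the scene a little,
        // until a minimum intensity is reached.
        if (Time.frameCount % frameInterval == 0 && sceneLight.intensity > 0.1f)
        {
            sceneLight.intensity -= dimStep;
        }
    }
}
```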

We also discussed whether we should change anything in the first room when the user goes back. After discussion, we all agreed to keep it the same: without any change in behaviour, we cannot expect a person's daily life to change easily. This represents an infinite loop that can only be broken by a change in awareness and behaviour. And though it is easy to realize how much plastic one person can generate, it is challenging to give up the convenience of plastic in our daily life even when we recognize its negative impact on the environment.

Invisible Cities Response

Invisible Cities is an interesting read that gives my imagination wings to fly among its unique cities. Olivia, in chapter 4, is among those that impressed me the most.

Olivia – Etching by Colleen Corradi Brannigan

The opening of this chapter suggests that the acceptance of failure is worse than failure itself. One of my favorite sentences in the book lies in this opening: “If you want to know how much darkness there is around you, you must sharpen your eye, peering at the faint lights in the distance.” It speaks to the need to find the good even in the darkest times, and it reminds me of a city in Vietnam – Saigon – which, to my surprise, later came to resemble Olivia.

Olivia, “a city rich in products and in profits,” has its prosperity indicated “only by speaking of filigree palaces with fringed cushions on the seats by the mullioned windows.” The city is described not only by its look but also marked by the leather smell of the saddlers’ shops, by the sounds of women chattering, and by “an action repeated by thousands of hands thousands of times at the pace established for each shift.” Marco Polo’s need to use different words to describe Olivia alludes to the fact that there is no single true perception of the city: each person forms their own understanding and perception of it based on their position in society. The impossibility of one true description of Olivia, as Marco Polo later remarks, also lies in the city itself: “Falsehood is never in words; it is in things.” Olivia, in itself, is impossible to perceive in one true way.

This part of Olivia reminds me of my city. It reminds me of sitting on the Saigon river’s bank, on the Binh Thanh district side, looking towards the lights of the skyscrapers across the river that form a magnificent skyline. Saigon, just like Olivia, is also “rich in products and in profits.” Saigon, like Olivia, is impossible to perceive in one true way. Ask a person to describe it and you will get a different answer – or perhaps an entirely different city – each time. A man who frequents the skyscrapers will paint you a glorious Saigon. Cross the river, and the people residing by the riverbank will tell you about its peaceful side. But just as the great Kublai Khan does, one must remember that “the city must never be confused with the words that describe it.” In all its glory or peace, in each of its skyscrapers or terraces, Saigon is a city of a million colors. Why describe it with just one color, one adjective?

Another detail of Olivia that resonates with my city is the repetitive cycle of a human’s life working for the industry, living one identical day after another. This cycle, as monotonous and perpetual as it is, if omitted, will lead to the collapse of the whole system. The repeated labor of a human is, after all, an indispensable gear in the industry.

Saigon Skyline, Vietnam

Project 2 Documentation: Boxing with Ares

PROJECT DESCRIPTION

  • Team members: Neyva, Nhi, and Vince
  • Environment: dark and ominous, yet there is still an indication of hope
  • Boxing with Ares is an interactive experience that aims to immerse users in a bloody reality. We focused on creating an environment similar to the one in the movie The Matrix: simple, yet giving users a sense of exploration through interacting with what lies in the scene. Our scene consists of plain ground with foggy surroundings, a red, bloody sky scattered with small floating punching bags, and the main punching bag for interaction. The basic interaction is the punching action, with an emphasis on unexpectedness: doves fly out at different angles, speeds, and positions around the punching bag. While the interaction is an analogy for finding hope in the darkest of times, it is also open to the interpretation that hope is escaping the users’ reach, impossible to capture. In either interpretation, this serves as an alternate-reality experience that deals with different manifestations of hope, be it the shining beacon in the darkest of times or the flickering illusion that forever remains out of human reach.
Dark, ominous, and bloody environment
Movie scene in The Matrix

PROCESS & IMPLEMENTATION

  • Our initial idea for the project was completely different from our final one, though the punching action remained the same. In our first sketch, we planned three elements: punching the bag (every time the user punches the bag, besides the normal oscillation, white/black doves fly out magically – the white doves represent peace, and the black doves represent the concept of war), pressing a button (each press changes the doves’ color from white to black and vice versa), and a theater stage (at the back of the stage, we considered putting words/colors/pictures to reflect the theme of our project).
Our first sketch for our project
  • After our first idea presentation and the switch to online classes, we decided to constrain our project to one interaction – punching the bag & doves flying out – and replaced the theater stage with an ominous, empty environment.
  • Our idea for the environment was inspired by the photo below.
Environment
  • After deciding on the final idea for our project, we divided the work as follows: Neyva worked on building the environment, while Vince and I worked on developing the interaction.
  • Our final environment reflects the identity we wanted to bring to this project. Using a grid to lay out small punching bags dotted across the dark, bloody sky, and particle systems to create the fog effects, we built an environment that implies the darkest of times, creating a feeling of loneliness as the player stands and observes the scene.
Red bloody sky
Small punching bags
  • For building the interactions, we wrote 5 main scripts in our project (a sketch of the punch-detection logic follows at the end of this section):
  1. animationController: triggers the animation whenever a punch happens. We also added a Punch class that returns anim.GetCurrentAnimatorStateInfo(0).normalizedTime in its get method.
  2. collisionDetector: if we detect a collision and anim.GetCurrentAnimatorStateInfo(0).normalizedTime < 1, we play the punching sound as if the player had punched the bag.
  3. RaycastTracking: we use a raycast to brighten the color of the punching bag whenever the player looks at it. This aims to attract the player’s attention and serves as an invitation to interact.
  4. ChangeColor: changes the emission color. In the end, we decided not to use this feature because the fog we added with the particle system made the change no longer noticeable.
  5. BirdGenerator: we created a Bird class encapsulating all the attributes; when we dynamically create the birds, we assign random values to those attributes. This keeps things consistent and makes it easier to add attributes in the future.
  • Camera: we used the First Person Controller camera, and the boxing man is a child object of the First Person Controller. This way, when we move the mouse, the boxing man moves accordingly. We also positioned the camera and limited the looking angle (x rotation: -60 to 45) so that the player can only see the boxing man’s hands and the space above.
Camera
  • Animation: we have two animations in this project: boxing man animation and bird animation.
Bird prefab

Sound effects: we added a background sound (https://www.youtube.com/watch?v=Qm-El3qztgw) and a punching sound to make the user experience more immersive.
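As referenced above, here is a minimal sketch of how the collisionDetector check works with the normalizedTime value from animationController; the field names are assumptions for the sketch:

```csharp
using UnityEngine;

// Sketch of the collisionDetector idea: a punch only counts while the
// boxing animation is still mid-swing (normalizedTime runs 0 -> 1 over one
// play-through of the state, so a value below 1 means the glove hit the bag
// during a punch, not while the player simply walked into it).
public class CollisionDetector : MonoBehaviour
{
    public Animator anim;            // Animator on the boxing man
    public AudioSource punchSound;   // assumed reference to the punch clip

    void OnCollisionEnter(Collision collision)
    {
        if (anim.GetCurrentAnimatorStateInfo(0).normalizedTime < 1f)
        {
            punchSound.Play();       // valid punch: play the punching sound
        }
    }
}
```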

REFLECTION & EVALUATION

This project was an amazing learning process and a great team collaboration. Though the final project looks different from what we envisioned in the beginning, in my opinion it is a successful adjustment to the feedback and improvements made during development. In addition, the process of writing these scripts and debugging was really frustrating but rewarding in the end. Being able to understand and adapt the resources we found in class and online for our project was definitely a great learning process. While we could not use VR for this project, the result still reflects what we wanted to deliver, and it turned out even more successful than we thought it would be.

AGENCY

Agency has been described as “the satisfying power to take meaningful action and see the results of our decisions and choices” (Murray), and it arises when “the actions players desire are among those that they can take” (Wardrip-Fruin). In our project, we chose to create an environment in which only a punching bag faces the player when they enter the scene, precisely to invite interaction with it. While the punching bag immediately captures the player’s attention and hints at the punching action, it also triggers confusion and hesitation in this ominous environment. Every punch produces a sound effect that makes the experience feel more real and powerful. The more the player punches, the more birds fly out – yet the birds fly away, out of reach. This reflects the interpretations mentioned in the Project Description, leaving the player with thoughts and reflections of their own.

Project 2 – Development Journal

Project theme: Peace & War

Project members: Neyva, Vince, and Nhi

Idea development: 

In our first project meeting, we built upon our class discussion of the Lo-Tech VR Interaction Exercise, choosing boxing/punching as one of the two main interactions in the environment we wanted to create. I personally like this idea because it reflects a strong action, and we can attach unexpected responses to a punch: the user may expect the punching bag to oscillate back and forth, but has no idea what effects beyond the norm might follow. We also hoped to develop our project to resonate with one of our class readings, “Responsive Environments” by Myron Krueger, making the user experience and environment more interactive and responsive. For the second interaction, we first came up with a light switch, an everyday activity, and tried to connect it with the punching action to form a cohesive story in an artificial reality. In the end, we slightly modified the idea and chose a button on a pedestal. With these two actions, punching a bag and pressing a button, both almost part of our everyday life, we hope to turn daily interactions into dreamlike effects in our project.

Specifically, the setting would be a theater stage with the punching bag and the button located on two opposite sides. We alter the reality by:

  • Punching the bag: every time the user punches the bag, besides the normal oscillation, there would be white/black doves flying out magically. The white doves represent peace, and the black doves represent the concept of war. 
  • Pressing the button: this button would change the color of the doves. Every time the user presses the button, the color will change from white to black and vice versa.
  • The stage: in the back of the stage, we are considering putting words/colors/pictures to reflect the theme of our project.

The main idea behind our project is to demonstrate how fragile the peace, and the world we currently live in and take for granted, really are. When punching the bag, users tend to punch harder and harder to see whether the effects change. Of course they do: more doves fly out. However, looked at more closely, this increase in the intensity of each punch represents the rising conflicts among individuals and nations. Moreover, just by pressing one button, we are able to enter a state of war. The button is also a symbol of war threats such as nuclear weapons – by just pressing a button, peace no longer exists; a war has begun (the white dove of peace changes to the black dove of war).

Project development:

We first sketched the initial 360 view of our project.

Our first sketch of the project

We have found a few free assets that could be beneficial for our project, including stage, punching bag, gloves.

March 9, 2020

After our team discussion, we decided to minimize our interaction to just the punching bag with birds flying out. The environment would be similar to the one below.

March 10, 2020

Vince and I worked on interactions of the punching bag.

The first problem we encountered was controlling the boxing man when we moved the mouse. I wrote a separate script for the camera controller so that camera position = box_man position + (proper) offset, and smoothed the motion using Vector3.Lerp(). However, after a few trials, we decided to use FPSController, which already has this script and makes our life easier. We used the first person controller (FPSController) prefab (renamed “player”) and made the box_man prefab a child of it. This way, when we move the mouse, the boxing man moves accordingly. We also positioned the camera and limited the looking angle so that we only see the boxing man’s hands.

The other part we worked on was collision detection. We decided to put the collision-detection script on the punching bag: every time there is a punch, the collision is detected and we trigger the animation. Besides detecting the collision, we also need to check whether it was a proper punching action (a collision can also happen when the boxing man simply walks up to the bag and touches it). We have two scripts for this: animationController and collisionDetector. In animationController, besides triggering the animation, we also added a Punch class that returns anim.GetCurrentAnimatorStateInfo(0).normalizedTime in its get method. In collisionDetector, if we detect a collision and that time is < 1, we play the punch sound as if the person had punched the bag.

March 11, 2020

After finishing the collision detection for the punching bag, we decided to use a raycast to brighten the color of the punching bag whenever the player looks at it (to attract attention).

First, we reused the code we learned in class. However, the color did not change even though the debug message indicated that the player was looking at the punching bag. We switched the tags and layers of the different components related to the punching bag to see where the problem was. After around an hour, we found it.

First, hit.transform.gameObject did not work, since the punching bag is a child component; we needed to use hit.collider.transform.gameObject instead.

Second, we could change the color of the prefab, but the change was not obvious to the player. Hence, we decided to change the emission color instead, so the bag brightens whenever the player looks at it.
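A minimal sketch combining the two fixes: reading the hit object through hit.collider and brightening via the emission color (the “PunchingBag” tag is an assumed name):

```csharp
using UnityEngine;

// Sketch of the RaycastTracking idea: hit.collider.transform.gameObject
// resolves the child that was actually hit (hit.transform points at the
// parent), and the emission color reads better than the albedo.
public class RaycastTracking : MonoBehaviour
{
    public Color glowColor = Color.red;

    void Update()
    {
        RaycastHit hit;
        if (Physics.Raycast(transform.position, transform.forward, out hit))
        {
            GameObject target = hit.collider.transform.gameObject;
            if (target.CompareTag("PunchingBag"))   // assumed tag
            {
                Material mat = target.GetComponent<Renderer>().material;
                mat.EnableKeyword("_EMISSION");     // required for emission to apply
                mat.SetColor("_EmissionColor", glowColor);
            }
        }
    }
}
```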

March 12, 2020

We started working on generating birds every time the player punches the bag. We wrote the script birdGenerator.cs to handle this behavior.

In this script, we initialize an arraylist to store the birds. When we receive a hit signal from collisionDetector, we instantiate a clone of the bird with a random speed and position (using Random.Range) and add it to the arraylist. Later, we loop through this list and set each bird's moving direction and speed.

March 13, 2020

We worked on changing the angle of the bird animation and setting a flying path for the birds. Since we wanted the birds to fly out at random angles, speeds, and positions around the punching bag, we created a Bird class and encapsulated all the attributes in it; when we dynamically create a bird, we assign random values to its attributes. This keeps things consistent and makes it easier to add attributes in the future.
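A sketch combining the list-based generator from March 12 with the Bird class described here; the attribute names and ranges are illustrative, not our exact values:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Sketch of the Bird class idea: encapsulate the randomized attributes so
// adding new ones later only touches this class.
public class Bird
{
    public GameObject obj;       // instantiated dove prefab
    public float speed;          // random flight speed
    public Vector3 direction;    // random flight direction

    public Bird(GameObject prefab, Vector3 spawnPos)
    {
        obj = Object.Instantiate(prefab, spawnPos, Quaternion.identity);
        speed = Random.Range(0.5f, 2f);
        direction = (Vector3.up + new Vector3(Random.Range(-1f, 1f), 0f,
                                              Random.Range(-1f, 1f))).normalized;
    }
}

// Sketch of birdGenerator.cs: keep spawned birds in a list and move them
// every frame so the doves fly up and away, out of the player's reach.
public class BirdGenerator : MonoBehaviour
{
    public GameObject birdPrefab;
    private List<Bird> birds = new List<Bird>();

    // Called when collisionDetector reports a valid punch.
    public void SpawnBird(Vector3 bagPosition)
    {
        birds.Add(new Bird(birdPrefab, bagPosition));
    }

    void Update()
    {
        foreach (Bird bird in birds)
            bird.obj.transform.position += bird.direction * bird.speed * Time.deltaTime;
    }
}
```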

March 14, 2020

We worked on changing the color of the birds. The bird prefab was originally red. Vince and I decided to experiment with the color once we received the environment from Neyva.

First, we tried giving each bird a random color (random values for R, G, and B). However, this variety of colors made the scene look extremely low-poly and did not fit what we had in mind for the final scene and the final concept of the project. Hence, we went back to our original idea of doves with a consistent white color.

We also decided to add background music to the project, since we want to convey a dark & ominous environment.

March 15, 2020

We worked on the final touches for our project. After Neyva added the particle systems that create the fog effects, we decided not to use the raycast anymore. Its original purpose was to brighten the punching bag a little to invite the player towards a potential interaction; however, the fog makes this brightening hard to notice, so in the end we decided not to keep it. We also worked on other small fixes and on our presentation:

  • Fix the position, density, and scale of the particle systems
  • Fix the color of the clouds and the skybox
  • Work on presentation and divide the part among team members

Response as a medium

How does response act as a medium? 

The paper “Responsive Environments” by Myron W. Krueger provides great insight into the concept of the responsive environment, which “perceives human behavior and responds with intelligent auditory and visual feedback” (423). Through the projects he worked on, such as GLOWFLOW, MAZE, and VIDEOPLACE, he proposes a new art medium based on real-time response and interaction between humans and machines, asserting that “Response is a medium” (430). It is important for such a medium to understand the participant’s action and respond intelligently. For example, given the participant’s position, the environment can create an interactive experience involving different sensations through “lights, sound mechanical movement, or through any means that can be perceived” (430). Some environments go even further, learning from experiences with different individuals and responding in the most effective way based on those judgments. The upshot is that an intelligent response to users/participants, or in other words a response to human action, itself acts as a medium.

From my perspective, the paper is a good read, giving me an eye-opening understanding of the responsive environment and its applications in different fields, including education, psychology, and psychotherapy. For example, although such environments still have a few constraints in their perceptual systems that limit their responses in certain ways, they can serve as a great alternative to traditional teaching or enrich students’ experiences through meaningful interactions.

Project 1 “Wonderland Syndrome” – Documentation

Project Description

In this project, I decided to recreate a cartoon scene from my favorite childhood movie, “Alice in Wonderland”. Hoping to turn reality into a more cartoon-like world, I wanted to focus on the specific scene where Alice meets the Cheshire Cat, similar to the one in this Youtube video. Here, the Cheshire Cat is the symbol of mystery and magic, while Alice’s experience, in my opinion, could be described as confused, lost, and uncertain. That is the identity I chose as my main guide during the development of the project. Please find the APK and Project File of the project here.

Final sketch of the project

Even though the final result slightly diverged from what I intended to create, it still reflects the identity I wanted to bring into the whole scene. The scene is a long path in a forest where the two sides of the path are complete opposites. One side is full of trees and bright fireflies, reflecting a positive sense (1), while the other is full of darkness, with dead trees scattered along the path and a few tombs lit by a mysterious blue light (2). I hope users experience this complete opposition when turning around to see both sides: while feeling drawn to explore the scary side, they are conflicted about whether they should choose the safer one.

Process and Implementation

I worked on the project for a whole week. Since I had no prior experience with IM or Unity, it took me quite a lot of time to understand how Unity works, learn its basic functions, and find assets for the project. Since the Cheshire Cat asset was too difficult to find or recreate, I decided to remove the cat while keeping the identity mentioned above.

I started by laying out the path. Since I could not find any asset with the right shape for the path, I created it manually from cubes of decreasing size towards the two ends. The decreasing size also creates depth, implying that the path continues beyond the scope of my scene.

First layout of the path

After finishing the path, I placed a few mountains in the scene and trees on side (1). Though the process was repetitive, I varied the spacing between the trees and rotated them so they look different from one another, reflecting a forest where trees are never evenly spaced. After that, I added fireflies of different sizes and at different distances from the viewing point, choosing a strong yellow-green color for them to create a sense of liveliness and brightness.

On the other side (2), I went for a completely opposite layout. At first, I only scattered dead trees, darker in color towards the horizon. I also added two colorful posts to explicitly mark the border between the two concepts. However, after laying out the trees, I felt the side was empty and did not reflect the identity I wanted to create. The posts were unnecessary, since I could demonstrate the opposition implicitly through the contrast among the objects themselves. Therefore, I removed the posts, added faraway mountains, and placed a few more tombs lit by the mysterious blue light.

Where the road was too flat, I added a few rocks to imply a sense of obstacles. To make the user feel even more lonely in such a remote place, I added the sound of wind to the project. I believe this addition makes the experience more real for users and enhances the immersion in the cartoon scene.

Side (1) of the path which reflects the sense of brightness and safety
Side (2) of the path which reflects the sense of mystery and fear
More of the bright side
Towards the darker side

Finally, I worked on the lighting of the objects and the directional light. In the beginning, the lighting was so bright that it did not fit the night setting of the scene, so I changed it to much darker lighting, without realizing it had become too dark. This was the biggest issue in my project: I did not account for how my own laptop screen’s brightness affected how I set the light, so the result differed from what I thought I had set for the scene. Although I only discovered the problem after presenting my project, I want to learn properly how lighting works in this case and how to set it correctly in the future. The lighting for the moon and the fireflies works well in the scene; the emission effect and the trees’ shadows create immersion and make the scene feel more like a 3D space.

In the beginning, the lighting was too bright to fit the night setting

Reflection/Evaluation

I hope my project evokes a sense of confusion, conflict, and a bit of mystery while turning reality into a cartoon scene from “Alice in Wonderland.” The project was a great learning experience for me, with great help and feedback from Professor Sarah Krom throughout the whole development process. Though not close to the original scene I wanted, it reflects the identity I hoped to create. I also came to learn that small details such as shadows can make the user experience feel much more real. I wish I could have added a few more movements to the scene to make it more immersive, such as the movement of the fireflies. I have not yet seen my project on an Android phone with Google Cardboard, and I look forward to that experience and to learning the differences between PC and Android.

Development Journal | First Project “Wonderland Syndrome”

1. Idea Development:

My first idea for the project was inspired by one of my favorite movies since childhood, “Alice in Wonderland.” I planned to recreate different scenes from the movie with a focus on landscape, lighting, interactions, and movement.

For example, by eating the mushrooms in one scene, the person can experience a change of scale.

I sketched out different scenes I wanted to include in the project as indicated in the image attached below.

First sketch of the project

However, it would have been extremely difficult for me to include all such details and interactions in my first project. After hearing suggestions and feedback from Professor Krom and other students, I reconsidered the feasibility of the project and decided to focus on one scene from the movie.

The concepts I want to focus on are fear and mystery. Hence, I chose the scene where Alice meets the Cheshire Cat. I named the project “Wonderland Syndrome” because I want to evoke a feeling of confusion and fear that is impossible to escape, just like a syndrome for anyone who experiences it. The overall scene is dark, with the only source of light coming from the moon and a few fireflies. A single dead tree stands near the canyon, reaching for the moon in the sky, creating a sense of loneliness and desperation. On a branch of the tree, we find the Cheshire Cat sitting with its signature scary smile. There is only one straight (or curvy, I haven’t decided yet) road in the whole scene, and it leads to nowhere. Please find the new sketch below.

New sketch of my project “Wonderland Syndrome”

I also have another idea for my first project, completely unrelated to the first one. For this new idea, I want to recreate our NYUAD campus scene, but with giant campus cats (in different postures). The idea came to me when I was walking back from D2 and saw different cats lying down or chasing each other in the space in front of D2. However, I am not sure whether this is a good idea for a first project, or whether it is possible to find the assets for it, so any suggestions would be appreciated!

2. Project Implementation:

I started laying out my virtual space in Unity, placing the mountains and the dead tree in the scene. I am still struggling with adding a light source for the moon, looking for additional assets, and envisioning the whole idea in 3D space.

Very first attempt to put the scene in Unity

UPDATE 16/2/2020, 9:29 pm

I finished the basic layout of my virtual space. Following Professor Krom’s suggestion, I placed multiple cubes of different lengths, decreasing in size towards the two ends of the path, to create a sense of infinity. I also adjusted the camera rig to match the scale of the mountains and trees (y = 4 instead of y = 2).

To be added/fixed: Cheshire cat, the lighting of the moon, the color/lighting of the sky, music (if possible)

UPDATE 17/2/2020, 7:41 pm

My current scene seems to have diverged from my original idea of the scene where Alice meets the Cheshire Cat. Instead, I tried to create two different vibes along the path, separated by two posts: one side [1] reflects a safer sense, while the other [2] reflects the opposite, a deadly sense. To do that, I made the lighting on side [1] much brighter than on side [2]. Side [1] also has a dense forest with a greenish platform, and towards side [2] the trees become sparser and the green of the platform and the trees darkens.

To be added/fixed: the spotlight in the [2] side, light emission from the 2 posts, and sound of strong wind (if possible)

UPDATE 18/2/2020, 8:19 pm

After meeting with Professor Sarah Krom, I decided to adjust my scene a little. The first change was the lighting: I decreased the intensity of the main light source since it did not quite fit the night. I removed the two posts since they were not really relevant to the picture. Instead, I added two tombs from the asset package and gave them some blue light to evoke a sense of mystery and fear. I also added faraway mountains to suggest a feeling of remoteness and emptiness, and some rocks along the path to make it less flat than before.

On the other side of the road, I added some rocks with grass on them. I also lit up the fireflies to evoke a sense of brightness and peace, matching the stars in the sky.

In general, I tried to make both sides of the path opposite in sense and feeling. To be added/fixed: sound of wind

UPDATE 18/2/2020, 10:22 pm

Sound of wind added. Project submitted.

Reading response to Hamlet on the Holodeck, Ch.3: From Additive to Expressive Form, Janet Murray

In the chapter “From Additive to Expressive Form: Beyond ‘Multimedia’,” Janet Murray discusses the ideas of additive and expressive forms, stating that “additive formulations are a sign that the medium is in an early stage of development and is still depending on formats derived from earlier technologies instead of exploiting its own expressive power” (p.83). Indeed, additive forms, such as narrative films, merely depend on the available technology without exploring possible changes to its physical properties. Expressive forms, meanwhile, are the result of a long process of discovery, invention, and adoption that extends the horizon of the current digital world.

From my perspective, without much prior experience with virtual reality (VR), I believe that today’s VR must also have started as a simple additive form and developed throughout history to reach its current, much more expressive state. VR falls between additive and expressive forms, as it is challenging to place it exclusively in either. Rather, it is in a process of heading towards more technological innovation and a more expressive virtual environment for users. Just a few decades ago, the concept of 3D or VR was still new to most people. Now VR is hitting the mainstream: hundreds of companies are working on creating and improving VR technology, adding devices and features. VR is everywhere, in games and films, and that makes the journey of its development even more appealing. It has come a long way since its start, and I firmly believe that VR is advancing towards a more expressive form.