Project 3 Documentation | The Fall of Octavia

Project Description

The Fall of Octavia is an immersive experience in Unity that depicts the demise of the fictional city of Octavia. First described in the novel Invisible Cities (1972) by
Italo Calvino, the city of Octavia is said to hang across two steep mountains above an abyss, held by ropes, bridges, and chains. Because of the precarious nature of this foundation, the city’s inhabitants all seem to know that their city will come to an end, sooner or later.

Adopting the perspective of a mother searching for her daughter in the falling city, the experience invites the player into a precarious situation that they must navigate. The experience is thus two-fold. First, the main objective is to find the daughter among the inhabitants of the city. Second, while traversing the city to find her, the player immerses themselves in the strange world of Octavia, trying to make sense of the city, its geometry, its direction, its motion, and its people in action through environmental storytelling elements.

The experience is open-ended. There is no time limit. Even though parts of the city are destroyed over the course of the experience, the player is welcome to explore the city at their own pace. However, once the player crosses over to the mountain, the rest of the city falls, its eventual demise happening right in front of the player’s eyes.

Process and Implementation

We started off with the ideation phase. Having unanimously agreed that we did not want to create an escape-room experience, we were left with either recreating a city from Invisible Cities or depicting an apocalypse. Why not combine both? We were all impressed and intrigued by the city of Octavia, whose unique foundation puts it in a precarious situation that would lead to eventual destruction. Octavia’s unique geometry would also leave a lot of room for environmental storytelling elements.

Some illustrations of the city we found

Originally, we came up with an idea in which the main character (the player) had to save the day when the spider that laid the hanging foundation of the city returned and threatened to destroy it out of vengeance (for occupying its net). However, after pitching the idea to everyone, we realized that the incentive of the experience was not sufficiently developed and would be hard to develop. Would the goal be cutting the ropes that hold the city, thus destroying the entire city AND the spider along with it (the only viable way to fight such an enormous spider)? Wouldn’t that go against the idea of saving the city in the first place, even though it would save the people who had already made it to safety across the mountain? And would we then have to explain that the spider would try to kill the people who had made it to safety instead of destroying the city itself?

Our very first sketch involving the spider

The idea turned out to be a rabbit hole. After much effort spent patching it together, we decided on another incentive for the player to traverse the city: a mother’s journey to find her daughter in a chaotic, gradually disintegrating city and get to the mountain before the entire city collapses. The task of finding the daughter gives the player a reason to actively explore the environment and take in as much as possible of what it has to offer, as opposed to rushing to the finish line. We also narrowed the main navigable route of the city down to a straight line (decorated, of course, by out-branching bridges and ropes that act as the backdrop for the city), since we did not envision this experience to be like an escape room where the player needs to take different paths to find something.

Our sketch for the refined idea

We also envisioned some sort of interaction between the player and the NPCs to further flesh out the environmental storytelling. Instead of asking the player to carry out a traditionally proactive interaction (pressing a button or holding a gaze), we took in the feedback from Sarah and let the NPCs unravel their animations when the player comes close to them, i.e. when they enter the player’s field of vision. Because looking is an active choice in a virtual reality space, the player sees the world and the story through the NPCs’ emotions and their reactions to the player’s presence. For instance, a woman who can be seen praying from afar begins to cry and lash out in frustration when the player comes closer. In another example, an injured man trying to run away from the crumbling city drops dead upon reaching the player’s field of vision. Seeing the man drop dead is an active embodiment of the dying city itself: the player is surrounded by the process, not just the aftermath, of the destruction, which evokes more emotion than corpses simply lying around. This leads to the finale: after making their way to safety, the player and the daughter witness the eventual obliteration of the rest of the city, wrapping up the entire experience.

Early on, my main responsibility was to create the Non-Playable Characters (NPCs) and their animations. A more detailed account of the process can be found in my development journal, but it can essentially be summarized as:

  • Using Adobe Fuse for 3D character creation
  • Rigging in Mixamo to apply animations -> importing into Unity
  • Tweaking the material properties so that the characters look as good as intended in Unity
  • Creating Animator Controllers in Unity and scripting them so that they change animation clips when the player is in the vicinity
  • Creating a special script for the daughter so that she follows the player

This was where things got trickier. At first, it was straightforward to rotate the daughter so that she always faced the player before she was picked up; it was also easy to anchor her rotation and position to the player’s respective values once she started to follow the player (using transform.position and transform.rotation). While the player uses the NavMesh and thus cannot walk through walls or other objects, the daughter is caught in a weird position (no pun intended): with her Rigidbody set to isKinematic she would knock over walls and other objects, and without it she would be knocked over upon collision with objects that are kinematic. Unfortunately, when both sides were set to isKinematic, she could walk through them as if the Collider were not applied. The goal was to have the daughter AND the player stop when either of them hit an obstacle, which proved difficult. It would likely involve passing information between the daughter’s and player’s scripts when collisions happen and moving the daughter using her Rigidbody instead of transform.position. I had to settle for the current imperfect implementation, as I had to move on to other components.
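The anchoring approach described above can be sketched in a few lines. This is a language-agnostic sketch in Python (the actual project used Unity C# scripts; the Transform stand-in and the offset value are illustrative assumptions, not the project’s code):

```python
import math

class Transform:
    """Minimal stand-in for Unity's Transform: a position on the
    ground plane (x, z) plus a yaw rotation in degrees."""
    def __init__(self, x=0.0, z=0.0, yaw=0.0):
        self.x, self.z, self.yaw = x, z, yaw

def follow(daughter, player, offset=1.0):
    """Anchor the daughter to the player's position and rotation
    (the transform.position / transform.rotation approach above),
    placing her a fixed offset behind the player's facing direction."""
    daughter.x = player.x - offset * math.sin(math.radians(player.yaw))
    daughter.z = player.z - offset * math.cos(math.radians(player.yaw))
    daughter.yaw = player.yaw

player = Transform(0.0, 0.0, yaw=0.0)   # facing +z
daughter = Transform(5.0, 5.0)
follow(daughter, player)                # daughter snaps 1 unit behind
```

Because this writes the position directly every frame, the physics engine never gets a chance to resolve collisions for the daughter, which is exactly why she can pass through obstacles; moving her through Rigidbody forces would let physics intervene.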

The opening scene

As some suggested adding an introduction to our experience, we decided to open with a flyby over the city while narrating the situation. The narrator briefly introduces Octavia and its unusual traits, establishes the player’s identity (a mother/parent), and sets a goal that incentivizes the player to traverse the city (find the daughter and head to the mountain). Text narration was discussed at some point; however, we agreed it would compete for the player’s attention with the moving scene of the city below. The only text element in the entire experience is its title, displayed when the narration finishes.

Reflection/Evaluation

Overall, I am happy with what the project turned out (or did not turn out) to be. During the ideation process, I would often feel overwhelmed by the scale of the city. Thankfully, with the simplifications we made regarding where the player can go and what the player can interact with, the experience retains its essence while we managed to strip away unnecessary elements.

That being said, if I were to nitpick, there are still some aspects of the experience that could be improved given enough time. As mentioned before, I would love for the daughter character not to walk through objects like a ghost. Also, a few more characters wouldn’t hurt, especially some NPCs who make it across the bridge to safety; for now, the player and the daughter are the sole survivors of the city. While such a scenario is not entirely impossible, it would be more realistic to have other survivors congregate on the mountain to witness the demise of the city as well.

Project 3 Developmental Journal | The Fall of Octavia

For the final project, Ellen, Steven, and I decided to create an impression of the city of Octavia (from Invisible Cities by Italo Calvino) as the backdrop for a mother’s journey to find her daughter in the midst of the city falling.

The city of Octavia struck us as precarious, its foundation held up by spider-web structures of ropes and ladders spanning a deep valley sandwiched between two tall mountain peaks. The fate of the city was believed to be certain: one day, the ropes would snap and the city would fall, bringing with it the people of Octavia.

Illustrations of the city of Octavia. These serve as a nice point of departure for constructing the city in a 3D environment.

Initially, we envisioned the city being destroyed by a monstrous spider that had laid the spider-web foundation of the city in the first place (on which the Octavians built their houses). The spider came back to its web only to find that it had been stolen by humans; it resolved to take vengeance and destroy the city, and the main character would have to save it.

However, after sharing the idea with everyone and receiving mixed feedback on the incentive of the main character (whose motive was not clearly defined as to whether to save the city or to save the people), we decided to ditch the spider idea altogether.

Initial rough sketch

Instead, the story now involves a mother and her journey to find her daughter in a chaotic, gradually disintegrating city and get to the mountain before the entire city collapses.

A big part of the experience will revolve around environmental storytelling. Along the way to finding her daughter, the mother will witness parts of the city (ladders, debris…) slowly fall into the abyss while people frantically try to escape. Some people cry. Some people yell for help. We hope to capture the essence of the city and its people in its demise.

We have yet to decide whether the mother’s journey should be simple (along a predetermined linear path, so that events can be triggered linearly) or more complex (allowing more degrees of freedom). We have to take into account the limitations of the Google Cardboard when devising the storyline of the experience. We should also think about whether to keep a time limit (which dictates two outcomes: the mother finds her daughter just in time and makes it out alive, or she fails and falls with the city) or not (which means the user can take as much time as they want to accomplish the task, which then begs the question of what keeps the user proceeding through the experience with the kind of emotion and pressure a mother in that situation would feel).

[Updated] April 22, 2020

After many discussions and rounds of feedback, we decided that the mother’s/father’s journey to find his/her daughter in the falling city of Octavia would be quite linear.

The revised 360 sketch of the scene. There is one main long stretch of pathway across the city, connecting the two mountains; from there, houses on smaller platforms branch out. The main pathway will be the only navigable space in the scene, which considerably simplifies the experience.

[Updated] April 29, 2020

I took on the role of creating the Non-Playable Characters (NPCs) that would further depict the demise of the city by incorporating human elements into the environmental storytelling of the experience.

As we picked the Medieval Unity package for the houses and other objects, it was clear that the characters should share the same aesthetics.

The scene’s Medieval aesthetics

Most of the Medieval characters we found were either too expensive or too few to work with; therefore, I tried to create them from scratch.

Even though I had experience using Adobe Fuse and Mixamo to create characters with compelling animations, I ran into many challenges:

  • Adobe Fuse’s aesthetics are too modern for our project. I could only find one suitable Medieval costume online, and even that costume proved difficult to work with down the line.
  • First, the original costume file (.obj) froze up Adobe Fuse because it contained too many faces (more than 20,000). I had to decimate the mesh in Blender to roughly a third of its original size so that it could be imported into Fuse.
  • Even then, when importing the costume into Fuse, something weird happened:
The costume did not cover the character’s body completely, no matter how hard I tried to reconfigure the body’s size. It seemed that something was wrong with either the original file or the resizing process.
Not ready to give up, I tried remodeling the costume file in Blender…
…which worked. However, I had to go back and forth between Blender and Fuse multiple times because there was no way to preview how the costume fit the character’s body.
After many iterations, I finally got something that looked acceptable!
This will be the daughter character. Below are some other women characters:

For now, I think I am done with the women characters. The next step is to find suitable Medieval male clothes and create some men characters. After that, I will be able to plug the characters into Mixamo to animate them and bring them into Unity.

[Updated] May 3, 2020

After modeling the characters in Fuse and animating them in Mixamo, I imported them into Unity. The import process, however, was far from straightforward: I had to tweak the materials’ rendering settings to get the most accurate depiction of each character.
Here is a before-and-after example for the eyelashes.
An overview of all the NPCs (of course, they would be scattered across the scene later on)
An overview of the NPCs’ animations. Most NPCs have two animations: a default animation that plays from the scene’s beginning and a secondary animation triggered when the player comes within a radius of, say, 20 units. The idea of this “interaction” is to free the player from having to concentrate on gazing at or clicking on the NPCs; instead, the environmental storytelling told by the NPCs naturally unfolds as the player walks across and explores the city.
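The trigger amounts to a simple distance check each frame. A minimal sketch in Python (the real version is a Unity C# script driving an Animator Controller; the radius value and names here are illustrative assumptions):

```python
import math

TRIGGER_RADIUS = 20.0  # illustrative, matching the radius mentioned above

def npc_state(npc_pos, player_pos, radius=TRIGGER_RADIUS):
    """Return which animation clip an NPC should play: the default
    clip, or the secondary clip once the player is within `radius`.
    Positions are (x, z) pairs on the ground plane."""
    dx = npc_pos[0] - player_pos[0]
    dz = npc_pos[1] - player_pos[1]
    if math.hypot(dx, dz) <= radius:
        return "secondary"   # player is close: play the reaction clip
    return "default"         # player is far: keep the idle clip
```

In Unity this check would set an Animator parameter rather than return a string, but the branching logic is the same.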

[Updated] May 8, 2020

This is the place where the daughter will be hiding. I put some fences around it to guide the player to approach the daughter from the right direction. I will add sound cues to suggest that the daughter is behind the fence when the player approaches this point in the game (the sound of the daughter crying/shouting for help). The daughter starts waving as soon as the player comes to the other side of the fence, i.e. closer to her, and shouts “Mom… mom… come closer” to encourage the player to come even closer while she waves. Once this interaction is achieved, the daughter “sticks” to and moves along with the player for the rest of the game. I also added a script that makes the daughter face the player while she waves.

[Updated] May 11, 2020

We received some feedback during the play-test session about introducing the scene, i.e. giving the audience some background on the city and what is going on. We decided to use a flyby camera showing the entire city from above, with narration, to open the scene. While adding another camera to the scene was easy, we ran into some problems integrating it with the existing scripts that use the First Person character controller and its camera; these have all since been resolved.

“Octavia, the spider-web city, is nested between the slopes of two steep mountains, suspended by ropes, chains and bridges. Suspended over the abyss, the life of Octavia’s inhabitants is less uncertain than in other cities. They know the net will last only so long. The fateful day has come, the bridges are breaking and the ropes snap, leaving the inhabitants to face their city’s terrible doom. Find your daughter, the one thing you value in the city, and flee to the slope on the opposite side to safety… ”
I used a black image on top of the canvas, changing its opacity, to implement the fading effect when switching scenes
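The fade is just the black image’s alpha interpolated over time. A minimal Python sketch of that logic (the actual implementation is a Unity script updating the image’s color each frame; the duration is an illustrative assumption):

```python
def fade_alpha(elapsed, duration=2.0, fade_in=True):
    """Alpha of the full-screen black image at `elapsed` seconds.
    fade_in=True fades FROM black (alpha 1 -> 0) as a scene starts;
    fade_in=False fades TO black (alpha 0 -> 1) before a scene change."""
    t = max(0.0, min(1.0, elapsed / duration))  # normalized progress
    return 1.0 - t if fade_in else t
```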
For the camera flyby, we originally kept track of the camera’s z position to determine when to terminate the flyby and switch to the first-person controller. However, this proved unreliable: sometimes the camera would reach the predetermined z position too soon and terminate before the narration stopped (because of inconsistent Update calls). So I did some research and used Time.deltaTime to keep track of real time in seconds, syncing the narration with the speed at which the flyby camera moves.
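The timing fix can be illustrated with a Python sketch of the flyby loop (the real script is Unity C# running in Update; the duration and speed values here are illustrative, not the project’s):

```python
def run_flyby(frame_dts, flyby_duration=30.0, speed=2.0):
    """Simulate the flyby loop: advance the camera by speed * deltaTime
    each frame and stop once the accumulated real time reaches the
    narration length, regardless of how uneven the frame times are."""
    elapsed, z = 0.0, 0.0
    for dt in frame_dts:              # dt plays the role of Time.deltaTime
        if elapsed >= flyby_duration:
            break                     # hand over to the first-person controller
        z += speed * dt               # camera movement this frame
        elapsed += dt                 # accumulated real time in seconds
    return elapsed, z

# Uneven frame times still end the flyby at ~30 s of real time:
elapsed, z = run_flyby([0.016] * 1000 + [0.1] * 200)
```

Keying the handover on accumulated time rather than on a target z position is what keeps the camera in sync with the narration even when frame rates fluctuate.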

Invisible Cities: Octavia

Octavia is a thin city, both literally and figuratively. It is literally held between two steep mountains by ropes, chains, and ladders, surrounded by nothing but the abyss. Its entire fate depends on those two thin contact points between its vast network of spider-web infrastructure and the mountains. Octavia may thrive, with “terraces like gondolas”, cable cars, and chandeliers, but its inhabitants know well enough that such prosperity comes with great demise. The question is not if but when the city will collapse into the abyss between the two steep mountains, when the city will lose its quasi-stability, when some ropes decide to snap and set off a chain reaction that brings down the entire city.

An artistic rendition of Octavia

Octavia reminds me of Dubai and other big cities in the Gulf region, not literally (maybe in a parallel world where this has happened) but metaphorically. If Octavia’s foundation is a net between two steep mountains, Dubai’s foundation is the discovery of oil in the hostile environment of the desert; neither offers a permanent sense of stability and certainty. Although Dubai has already moved on from oil and diversified its economy, its physical foundation has been laid: skyscrapers and condominiums that run on air conditioners and desalinated water pumps. An engineer once told me that buildings over 6 storeys high are unsustainable here in the region. Good luck trying to find one that is not, except for traditional houses from the era when the city was still a small, sustainable fishing village.

How Dubai should look

If there is one thing Octavia’s residents should do, or should have done, it would be to build with light materials, to reinforce the ropes, or to not build such a city at all. The same can be said of Dubai: a city of such size and infrastructure should not have been built in such an environment. Dubai’s fate lies in a changing climate that renders the desert city increasingly inhospitable. Once the air conditioners run out of electricity, once the pumps run out of water, Dubai’s abyss will become apparent.

Project 2 Documentation | Boxing with Ares

[Updated March 28 2020] Added to Documentation Category

Project Description

Boxing with Ares is an immersive experience in Unity that invites players into a dark and eerie world where they must navigate their internal conflict between peace and war, between hope and sorrow. An inviting big red punching bag sits in the center of a gloomy, obscure, and desolate ground, actively contrasted by a sky filled with grids of smaller punching bags that seemingly blend into bloody cloud streaks: what could go wrong, what other ominous things could happen here?

Unbeknownst to the players, dozens of doves fly out of the punching bag whenever it is punched. That is simply not how punching a bag works in real life. The act of punching something is supposed to be violent: how could it make sense alongside such a symbol of peace, how could two such antipodes co-exist in the same world, let alone in the same interaction? Taken aback by the unexpected interaction, the players then face their internal struggle of interpreting such encounters: whether to keep punching or to stop the violent act, whether to spread peace by setting the doves free or to let hope die out by chasing the doves away…

Process and Implementation

The very first step of brainstorming for this project was to come up with an everyday activity that we would modify in accordance with the alternate world. Someone yelled “let’s do boxing” (I don’t remember who), but the idea was so captivating that we went full force with it. The word “magic” somehow popped up in the conversation, and I said, “What if birds fly out of the punching bag like they magically fly out of a magician’s hat?” Instantly, something clicked: we realized that if the birds were doves, which have long symbolized peace, they would unexpectedly counterbalance the suggestively violent act of boxing. They would open up so many questions revolving around peace, war, and the agency of the player and his/her internal struggle between good and bad. There would also be a button to change the color of the doves flying out (this, however, quickly proved to be a hasty idea that did not blend well with the rest of the experience).

Initially envisioned for the Vive system, the interaction was intended to be organic, analogous to punching a bag in real life: the player would pull the trigger while holding tight to the controller (resembling a clenched fist) and accelerate the controller/hand forward towards the punching bag.

The act of punching a bag. This is also the asset we found on Unity for the experience.

We also envisioned a theater environment in which the player is given a platform to perform their internal struggle between peace and war.

This is our first sketch of the experience

However, after receiving feedback from Professor Sarah and our classmates, as well as the breaking news of the coronavirus, which had a big impact on how we designed the experience, we revised and narrowed down our initial idea. Specifically:

  • The interaction would only involve the action of punching the bag
  • The theater environment would be changed to a less context-based and more provocative space. We took inspiration from this scene from The Matrix, in which the environment offers no concrete clues as to where the player is situated, a place not defined by conventions.
  • We also took more inspiration from this set-up: we wanted some fog in the environment, as well as smaller punching bags randomly hung from the sky without any explanation of why they are there, opening up possibilities for self-interpretation and self-reflection on the player’s part.

With the developed idea in mind, we started to work on the project. Neyva took charge of the environment, while Nhi and I worked together on the camera, the character, the interactables, and the interactions.

We decided to mount the main camera on the character so that the player can see his/her hands. Since we could no longer use the Vive and its associated in-scene controllers, being able to look down and see one’s hands provides a visual cue that interactions through the hands are possible. We limited the angle to which the player can rotate the camera downwards, however, as we did not want the player to be able to look through the boxing man’s body. Lastly, we made the camera and the boxing man children of the First Person Controller so that they move in tandem with the player’s inputs. This is about as close as we could get, without the Vive, to the experience we intended.
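The look-down limit is essentially a clamp on the camera’s pitch. A one-function Python sketch of that idea (the real limit lives in the Unity controller; the angle values are illustrative assumptions):

```python
def clamp_pitch(pitch, min_pitch=-40.0, max_pitch=60.0):
    """Limit how far (in degrees) the player can tilt the camera,
    so looking down never clips through the boxer's body."""
    return max(min_pitch, min(max_pitch, pitch))
```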
We then added a script to detect the collision between the boxing man’s hands and the bag. We could not rely on the default Collider alone because we had to check that the collision was indeed caused by the punching action, not by accidental touches due to proximity to the bag. After detecting the collision, we added a script containing a class Bird to generate birds flying out of the bag. The birds are generated with random positions, random angles, random velocities (using a Rigidbody and the AddForce function), and correspondingly scaled animation speed (the faster the velocity, the faster the wing-flapping animation).
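The spawning step can be sketched as follows in Python (the actual class is a Unity C# script applying Rigidbody.AddForce; all ranges and the animation-speed scaling factor here are illustrative assumptions, not the project’s values):

```python
import random

class Bird:
    """Sketch of the Bird class: each punch spawns birds with random
    position, launch angle, and velocity; animation speed scales
    with velocity so faster birds flap faster."""
    def __init__(self, rng):
        self.angle = rng.uniform(-45.0, 45.0)     # launch direction (degrees)
        self.velocity = rng.uniform(2.0, 6.0)     # stands in for AddForce magnitude
        self.position = (rng.uniform(-0.5, 0.5),  # random offset on the bag
                         rng.uniform(0.5, 1.5))
        self.animation_speed = self.velocity / 4.0  # faster bird, faster flapping

def spawn_flock(n=12, seed=None):
    """Spawn the birds released by one punch."""
    rng = random.Random(seed)
    return [Bird(rng) for _ in range(n)]
```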
The bird prefab up close. We played around with its color and ended up choosing a white-grey-ish tone that suited the monotonous tone of the environment without overpowering the experience. The moving birds provide a lot of contrast to the scene, which is predominantly made up of stationary or slow-moving components.
The punching bags in the sky placed by Neyva.
The clouds in the sky, placed by Neyva. Originally they were white; however, after toying with the skybox a bit, we set them to red, making the environment even more mysterious, hellish, dark, and cruel.
The ground fog effect (particle system) created by Neyva

Project Reflection

Overall, I am satisfied with what we were able to achieve in such a changing and challenging situation.

First of all, I can feel a sense of an alternate world in the experience. From the very prominent cue of a dark, ominous sky, devoid of sunlight and dotted with bloody streaks of clouds, to the less conspicuous desolate layout of the immediate environment with only a lone punching bag on the ground (or rather the lack of a definitive ground: only a featureless plane that extends and seamlessly melts into the horizon), to the omnipresent ambient sound that suggests a rather unsettling tone: everything works together to transport the player into an alternate world that one might imagine but is too scared to face himself/herself.

Moreover, being able to see his/her hands (or rather, hands in a pair of red gloves) right from the beginning of the experience immediately reminds the player of the possibility of using them for interactions. Apart from the smaller punching bags placed far too high for any conceivable interaction, the one and only thing within the player’s reach is the inviting big red punching bag a few feet away. It is obvious that something, expected or not, will happen when the hands meet the punching bag.


One small thing to note, however, was the absence of the volumetric spotlight that shines above the punching bag. While it was functional in the Unity project, when we exported an executable app the volumetric light was nowhere to be seen. While it originally served to further emphasize, and act as an invitation to, the punching bag, its absence in the final app did not have a big impact on the experience as a whole: the aforementioned features are enough to act as affordances for the experience.

The end product came quite close to what we envisioned. In some ways it exceeded my expectations: it feels both more real and more alien than I could have imagined. While the immersiveness of the medium lends itself nicely to the experience, giving the player the freedom to explore and interact, it also presented us with the challenge of putting things where they should be. For example, we offset the punching bag quite a bit from the player’s initial position so that the player can grasp the environment as a whole before delving into the interaction. This was met with positive comments from our classmates, who said it gave them a pause to think about their actions: whether or not to incite more violence by punching the bag in an already violent environment.

Agency Question

The very first thing that grants the player agency in this experience is the ability to see his/her hands right from the beginning. What’s more, the hands are not bare: they are inside a pair of red boxing gloves, which imbues the player with an elevated kind of agency, the kind that comes with capabilities specific to boxing gloves. The sight of a matching red punching bag in the distance immediately afterwards inevitably invites the player to come closer and act on the thoughts triggered upon first seeing the gloves. A satisfying sensation derives from the ability to punch the bag (either through a mouse click, as implemented here, or with an actual forward movement of the controller while clenching a fist, as initially imagined for the Vive) and see the bag respond to the action through its change of position and speed in space and time, as well as through auditory cues (impact sound). Moreover, beyond the expected displacement of the punching bag, the player is surprised by doves flying out of the bag every single time it is punched. It is at this moment that the player realizes they can not only physically influence the world, but also extend their bestowed agency over the innocent doves somehow “trapped” in the bag: to decide either to set them free, spreading hope outwards, or to keep them inside, holding on to the last bit of hope in this dark environment.

Project 2 | Document Journal

<Scroll down for the oldest post>

[Updated] March 15, 2020

We began to integrate our interactions (the camera, the punching bag, the boxing man, the bird) into the environment that Neyva created. There were, by all means, some difficulties putting everything together, and we had to manually place every object we had created. There were some compile errors and some Unity glitches with the fog particle system, but we overcame them in the end. I found a sound piece that fits nicely with the ambient environment of the experience here. We also played with the color palette of the environment and its objects, and ended up coloring the birds white-gray-ish and the clouds, punching bags, and punching gloves red.

[Updated] March 13, 2020

We wrote a script, attached to an empty game object, that clones a new bird prefab every time the player punches the bag. We created a class Bird that holds the initial conditions of the bird (position, speed, angle, animation…) and the bird GameObject itself. Initially, we used a Character Controller and its Move function to move the birds; however, the Character Controller component comes with a Collider by default, and no matter how hard I tried to disable it, it still collided with the punching bag, causing unwanted bag swinging that didn’t look good.
Therefore, I did some research and came up with the idea of using a Rigidbody and AddForce to move the birds instead (without adding a Collider, to avoid collisions with the punching bag), which works like a charm. For some reason, the volumetric spotlight above the fog worked in the Unity project but was nowhere to be seen in the exported app, which we could not fix and had to say goodbye to :'(

[Updated] March 11, 2020

We added a raycast script to the camera so that when the player is looking at the punching bag, it becomes brighter, signalling an invitation for possible interaction. Along the way, we encountered some challenges that weren’t brought up in the original raycast example.
Firstly, because the punching bag is a child of other components, we had to use hit.collider.transform.gameObject instead of just hit.transform.gameObject (which returns the parent of the punching bag instead).
Secondly, because the material used for the punching bag is not a simple color, we had to use the emission color to alter the brightness of the punching bag.
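Put together, the highlight script looks roughly like this sketch (the class name, the "PunchingBag" tag, and the glow color are illustrative assumptions, not our exact code):

```csharp
using UnityEngine;

// Illustrative sketch of the look-at highlight: raycast from the camera
// and brighten the bag via its emission color when it is being looked at.
public class BagHighlighter : MonoBehaviour
{
    public Material bagMaterial;    // the punching bag's textured material
    public Color glow = Color.red;  // emission color while looked at

    void Update()
    {
        // Cast a ray straight out from the camera.
        Ray ray = new Ray(transform.position, transform.forward);
        bool lookingAtBag =
            Physics.Raycast(ray, out RaycastHit hit)
            // hit.collider belongs to the child object actually hit;
            // hit.transform can resolve to its parent instead.
            && hit.collider.transform.gameObject.CompareTag("PunchingBag");

        // The textured material has no flat color to brighten,
        // so raise its emission instead.
        bagMaterial.EnableKeyword("_EMISSION");
        bagMaterial.SetColor("_EmissionColor",
                             lookingAtBag ? glow : Color.black);
    }
}
```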

[Updated] March 10, 2020

Neyva is working on the environment while Nhi and I work on the player and the interaction with the punching bag.

Below is a video of the working punching bag (the environment is a placeholder, not the environment we are creating for this project):

After some experimenting with the character controller, we decided to settle on the first-person controller from the Standard Assets and make the boxer a child of the controller so that he and the camera follow the controller’s movement.
The Unity package comes with a boxer and his associated boxing animation. As this is a first-person experience, we intended to keep only his two gloved hands in front of the camera as an affordance for the interaction with the punching bag. However, we couldn’t separate them from his full body rig, which is read-only; therefore, we placed the camera just outside of the body and restricted it so that the player can only see the hand movements (not the entire body).
Having some experience with Unity animation, I did some digging and found a way to trigger the boxing animation on click. I also slowed down the animation and configured it so that it can only be triggered again once it finishes (to avoid the animation restarting when the player clicks too fast). We also added a collision detector script on the punching bag and passed some variables between scripts to determine whether a collision comes from the punching action and not from an accidental touch (in case the player gets too close to the punching bag and brushes it with the hands). We also added a sound effect upon collision.
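The punch-versus-touch filter can be sketched as below. This is an illustrative reconstruction, not our exact script: the "Glove" tag and the shared `punchInProgress` flag (assumed to be set by the script that triggers the boxing animation) are hypothetical names.

```csharp
using UnityEngine;

// Illustrative sketch of the bag's collision filter: a glove contact only
// counts as a punch while the punch animation is actually playing.
public class BagCollisionDetector : MonoBehaviour
{
    public AudioSource impactSound;      // played on a real punch

    // Assumed to be set true by the animation-trigger script for the
    // duration of the boxing animation, then reset to false.
    public static bool punchInProgress;

    void OnCollisionEnter(Collision collision)
    {
        // Ignore anything that isn't a boxing glove.
        if (!collision.gameObject.CompareTag("Glove"))
            return;

        // Brushing the bag while walking past is ignored; only a glove
        // contact during the punch animation plays the impact sound.
        if (punchInProgress)
            impactSound.Play();
    }
}
```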

[Updated] March 09, 2020

After some more meetings with our team, taking into consideration the feedback we received in class and the constraints we now face due to *cough* COVID-19, here are some revised aspects:

  • We will ditch the button; instead, we will focus on the main interaction between the player and the punching bag. Doves will fly out of the bag when it is punched, as before.
  • We will ditch the theater. The environment will be simplified down to its barest elements. We took inspiration from the scene below:
  • The main cylindrical punching bag will be in the middle of the scene, surrounded by smaller punching bags held in place in the sky. Fog will be used to create a mysterious atmosphere. The environment will remain mostly dark and unlit; there will be a volumetric light illuminating the main punching bag.

March 4, 2020

For the second project, we (Neyva, Nhi, and I) adopted the struggle between war and peace as the central theme of our experience.

Storyboard

The experience is set on a theater stage where the user is placed next to a punching bag and a red button on a pedestal, corresponding to two everyday activities: punching/boxing and pressing (buttons). What makes the environment alternate, or surreal, is the way those two objects and the environment respond to the user’s interaction:

  • To punch the bag, the user will need to press the trigger to form a fist with their hand and accelerate their fist forward. Upon being punched, white doves will magically appear from the punching bag and fly around the stage.
  • To press the button, the user will need to press the trigger while aiming at the button. Upon being pressed, the button will magically turn all the doves black.

The theater stage thus presents the user with an alternate space in which to perform their relationship with war and peace. The sheer violence of punching is counterbalanced by white doves, which have long symbolized peace and the aspiration for peace. On a similar note, the symbolic act of pressing the red button, often invoked as the threat of global war, is materialized in the transformation of white doves (peace) into black doves (war).

Below are some Unity assets and some reference images we’ve found so far:

A cinema theater asset on the Unity store. We will try to convert it into a theater stage.
A punching bag asset in Unity Store.
A pair of boxing gloves found on Free3D. We will try to replace the Vive’s default controllers with these.

How Would a Response in VR Seem Intelligent

In Krueger’s Responsive Environments paper, he argued that for an interactive medium to respond intelligently, “it must know as much as possible about what the participant is doing”. It is important for the computing machine to obtain as much multi-sensory information about the user’s inputs as possible and to use its algorithmic processes to produce a corresponding response. The way in which the medium responds also reflects its intelligence, be it to the user’s position, velocity, or change of shape.

Expanding that concept to VR environments, I believe that collecting user inputs at high resolution and accuracy is imperative for intelligent responses. User inputs here can be headset position, rotation, velocity, and acceleration. They can also come from the controllers (all the data mentioned above, plus click detection, drag detection…). For more premium VR headsets that employ spatial tracking, the user’s position inside and outside of the environment can be utilized.

One possibility that promises to be a game changer is the ability to track the user’s eyes, which opens infinitely many doors for novel interactions and responses, as it mimics how we visually perceive the real world. One example of an intelligent response enabled by eye tracking is foveated rendering. Foveated rendering uses algorithmic processing to render the area the user is looking at in a higher resolution than the periphery (which will be blurrier), producing a more realistic VR environment while saving bandwidth and thus achieving a faster response time. As the user moves their eyes around, the focal area changes accordingly in a timely manner: an intelligent response.
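The core idea can be captured in a toy function. This is purely illustrative (real headsets implement this inside the render pipeline, and the thresholds below are made up): render-resolution scale falls off with eccentricity, the angular distance from the gaze point.

```csharp
// Toy illustration of foveated rendering's falloff: the further a region
// is from the gaze point, the lower the resolution it is rendered at.
// The angle thresholds and scale factors here are invented for clarity.
static float ResolutionScale(float eccentricityDegrees)
{
    if (eccentricityDegrees < 5f)  return 1.0f;   // fovea: full resolution
    if (eccentricityDegrees < 20f) return 0.5f;   // near periphery: half
    return 0.25f;                                 // far periphery: quarter
}
```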

Project 1 Documentation | Our Small Existence

Description

Despite the fact that the world is ever more connected, I feel increasingly lonely and small amidst all the chaotic events taking place all over the globe. The imminence of global warming, of global wars, of global inequalities… threatens to put an end to our civilization, to wipe away the existence of everything on the small rock we call Earth. Many of us are yet to realize that our existence is so small that it does not even outlast a blink of an eye compared to the scope of the universe, of space and time. Does everything that happens here on Earth matter?

Adopting this rather bleak outlook on our existence on Earth, I created an alternative-reality experience in which the last human being stands on the surface of the moon, looking back at an explosion-stricken Earth, surrounded by nothing but the moon’s deserted terrain and relics of humanity’s lunar exploration. Throughout the experience, I wanted to emphasize the smallness and emptiness one feels in such an environment, in the hope that it might influence how we see our fragile existence on Earth.

A snippet of the immersive experience

Process and Implementation

This is the first sketch I drew for the scene.

Regarding the design aesthetics of the environment, I chose to stay as faithful as possible to people’s perception of the moon through realistic rendering of its surface and terrain, albeit with the surreal element of an explosion-stricken Earth. By combining the two ends of realism, I hope to create an experience that is almost real, but not quite: something that could happen within our current understanding of reality, perhaps in the far, dark future.

It was clear from early on that the “front” of the scene (where the viewers can see the Earth) would be mostly flat, dotted with some relics of previous lunar exploration (a lander and a flag). The terrain transitions into a rockier, more mountainous region that wraps around the viewer’s position towards the “back” of the scene, rising higher and almost touching the sun in the sky, which heightens the viewer’s insignificance even more.

The lunar terrain was built with the Terrain tool in Unity, using a heightmap of the moon as well as the tool’s multiple terrain brushes. A cement material was used for the moon’s surface, and it suited the look perfectly.

An overview of the terrain. The viewer is situated in the middle (the flat land)

In the front view, relics of lunar exploration (a lander and a flag) can be seen. If the enormous distance between the Earth and the moon creates a spatial separation between the person and the rest of the human race, these space relics add a second layer of human separation to the piece: temporal separation. The only other time people shared such an abandoned place and such an isolated feeling was back in the ’60s and ’70s during the Apollo program. The person is lonely, both in space and in time.

Compared to the first sketch, there were some changes to the scene. First of all, the position of the sun was pushed to the “back” of the scene so that it could illuminate the “front”. If it had been at the “front”, it would have cast shadows on the flag and lander, making them impossible to see clearly.

Due to a lack of atmosphere and thus a lack of ambient light scattering, the sky is black and anything facing away from the sun is unlit (the mountain range in this case). Therefore, I moved the sun to be in the “back” to light up the front objects.

Also, I originally envisioned the Earth slowly disintegrating into pieces, instead of the smaller explosions on the surface seen in the final product. Such a slowly disintegrating Earth would have had a stronger impact on the physicality of the scene by providing viewers with a nearly frozen capture of the Earth in its most vulnerable form. By not showing the direct cause of the disintegration, I hoped to ignite a sense of mystery and unease.


However, after toying with the Dissolve shader, I realized that I had to switch every other object’s shader to the Lightweight Render Pipeline as well, and when I did, many of them did not retain their original materials. That’s why I ended up using particle systems to create smaller explosions on the surface of the Earth, in the hope of achieving the same effect: showing the viewers how small and fragile our existence on Earth is, and that maybe we should embrace and appreciate it more.

The explosions made with particle systems

Reflection and Evaluation

Overall, I am moderately satisfied with the environment, despite failing to produce a more enticing, dramatic disintegrating Earth. Also, while I failed to produce a sky filled with stars and asteroids flying around (the procedural sun skybox did not allow me to add another six-sided skybox filled with stars), I think it would not have added much to the experience as a whole. The fact that the sky is desolate, except for the ever-present sun and the dying Earth, amplifies the emptiness of the environment.

The drastic disparity between the expansive, deserted environment on the moon and the tiny size of the Earth, and everything that belongs to it, highlights how insignificant our existence is, not only for the lone person standing on the moon but for our entire human race. Such a size disparity is nicely conveyed with the help of virtual reality, where viewers are not limited to a flat, fixed 2D view but enjoy a free 360-degree experience.

Project 1 | Our Small Existence

[Update] Tuesday, Feb 18, 2020

I added a flag and played around with the materials and the normal map to make it look old
I then added a lander model that I found on NASA’s website
Added an Earth. I first built the one on the left using a map of the Earth and a normal map of the Earth’s surface. However, struggling to add an atmosphere to the Earth, I found an Earth asset on the Unity Store that looks a thousand times better.
I then tried to create a Dissolve shader to apply to the Earth to make it look like it is disintegrating. However, I ran into this problem with the ShaderRender package. I looked it up online and found out I had to change the shader of all the game objects to the Lightweight Render Pipeline. However, when I did that, some of the materials got reset and messed up the whole scene.
I then tried to use the particle system to create explosions on the Earth’s surface instead. Even though I did not accomplish the disintegrating Earth I set out to create, this is still a good alternative for my scene, given the limited amount of time I have had to learn Unity so far.

[Update] Sunday, Feb 16, 2020

I started by creating a new skybox with a very thin atmosphere and a small, far-away sun. This replicates the sky as seen from the moon’s surface.
I then toyed with the Terrain tool and its different paint brush options to make a rough sketch of the moon’s surface.
While the process was fun in and of itself, it was nearly impossible to create realistic-looking craters or impact sites on the moon with the brushes alone.
I then stumbled upon a tutorial detailing a tool that lets me create terrains using heightmaps.
Even though it was realistic-looking, the terrain was a bit featureless in my opinion, which prompted me to use the brushes to add some mountains and ridges that would serve to guide the viewers.
This faces the mountain, which will be the “backdrop” of the environment

Wednesday, Feb 12, 2020

I want to be alone; on the moon, with nobody else; maybe with an abandoned rover that lost contact with Earth a couple of years ago; maybe with a forgotten flag erected a century ago to mark the long-gone existence of another human being here. A chill gushes through my body as I look around the serene and desolate vastness of the moon. The sun, millions of miles away, looks so small, barely lighting the dim gray surface of the moon. There is almost no atmosphere; the sky is a black patch of ink, dotted with lone stars light-years away. I want to immerse myself in that environment; I want to feel small, to feel lonely, to feel empty.

Looking up, the Earth is exploding into bits, or rather disintegrating, so slowly as if it was shot in slow-motion. What would it feel like to see our home not only from such an enormous distance but also when it is doomed? What would it feel like being the last man to survive? I want to seek answers to those questions.

My drawing probably doesn’t do justice to the image I have in my mind

Some ideas as to how this can be translated into a Unity experience:

  • I should probably toy around with the skybox to recreate the sun: small, distant, weak. Also, the atmosphere is almost non-existent. The lighting should be parallel, but there shouldn’t be light scattering (which makes the sky blue here on Earth); the sky should be black.
  • As discussed in class, I should also toy with the terrain tool to recreate the craters-filled surface of the moon. Maybe to create some mountains as well?
  • I want the Earth to be slowly disintegrating, in a slow-motion manner. I don’t want to animate the explosion itself, because then I would somehow have to focus the viewer’s attention on the Earth so they watch the explosion process. I want the viewer to feel free to look around, and if they look up, they will see the Earth’s pieces already slowly floating away from one another.

It would look somewhat similar to reference images of a disintegrating Earth we found online.

Hamlet on the Holodeck, Ch 3: From Additive to Expressive Form

Even though VR headsets are becoming cheaper and more advanced, to the point that some cost less than an average phone, I personally think the reason VR is still pretty much a niche market is that most of its content relies on existing technologies, augmented with a few tactics that have not offered the average user a major breakthrough in how they experience the medium. As Murray notes, quoting McLuhan, “the content of any new medium is an older medium”. From my experience with VR, I personally think VR is on its way to becoming expressive. But for now, most experiences in VR, be they 360 videos, immersive video games, or virtual social networks, lean more toward additiveness than expressiveness.

For example, Murray talked about how filmmakers exploited the properties of film to cut scenes, change focus, create dramatic effects…, and how that transformed the “photo-play” from a recording technology into an expressive medium. While there exist some 360 videos that use spatial sound and unique perspectives to engage the user in a constant state of self-location and scene navigation, the majority of them (which also happen to be the most accessible through traditional video streaming platforms like YouTube) are still shot with the same approach used in traditional film-making. The burden of having to move around the immersive world, in this case, outweighs the added benefit of a wider field of view, which can be traditionally replicated by using multiple fish-eye lenses.