Fall Of Octavia | Project Documentation

Project Description

The Fall Of Octavia is a piece of interactive storytelling designed for the Google Cardboard. The project is based on the city of Octavia as described in Italo Calvino's Invisible Cities. The piece takes this depiction further and imagines the complete destruction of the city, tasking the user with moving through the city and escaping with their daughter before it collapses. The city is depicted as constructed of wooden, medieval-style houses. The book describes it as a “spider-web city,” with many floating islands connected by bridges and ropes. As a consequence of the city's flimsy architecture, this precarious situation is viewed as a part of life in the city, as something that the inhabitants must face every day. How does this constant threat of collapse and destruction shape the culture of the city? In what ways would its inhabitants react if the city were to collapse? How might these reactions to death differ from those of inhabitants in other cities? The project depicts the gradual destruction of the city and the reactions of its inhabitants, and brings the user into this dynamic environment.

Process/Implementation

We agreed that our final project would be based either on the theme of apocalypse or on a city from Invisible Cities. Eventually we decided on depicting Octavia and imagining the city as an explorable virtual reality experience. We found the premise interesting because of the city's architecture and the dangers that this architecture poses to the city's preservation. We were also inspired by a few depictions of the city that we found online:

At first our idea was to put a fantasy spin on the destruction of the city, playing on the notion of a “spider-web city” by introducing a malevolent spider that attempts to destroy it. In this version Octavia would have been constructed by a spider and built upon by human settlers who were unaware of the spider's existence or believed it to have disappeared. Because we believed there were many interesting things we could play around with in the destruction of the city's islands, we found that focus to be at odds with defeating a spider character. Furthermore, we were unconvinced that this would be a compelling experience because of the restrictions of the Cardboard and the difficulty of creating an engaging way to kill or repel the spider with just the one button.

Initial sketch of the spider scene

After some more brainstorming and more time with the Google Cardboard, we decided it might be better to focus on the environment rather than on a story that would require a more complex interaction system. Two questions came to mind when we decided this: why should the user care about exploring the city, and how should they navigate it with the Google Cardboard? To address this, we laid the city out as one strip of planks with houses on either side of the path. This was done to immerse the user in the scene, making it apparent that the city hangs in the sky between two mountains, and to encourage the user to travel along that path exclusively, naturally motivating them to get from point A to point B. Because we also wanted the user to pay attention to the surroundings and the city's inhabitants, we made the user's motivation finding their daughter in the collapsing city. We realize that this motivation may be a bit superficial, but given the difficulties with the Cardboard and our desire to make the user focus on the inhabitants, we decided it was a good option, even though finding the daughter is an easy task. The user can decide whether or not to save their daughter, and the decision has no impact on the ending of the story. The user is also given no time restrictions. These decisions were made to encourage the user to fully observe the environment while “encouraging” them to seek out their daughter for the sake of the story established in the introduction sequence.

Sketches of the scene design for the final idea

For our interactions we kept things minimal. The first interaction is clicking on the ground around you to move the camera in that direction; the second is approaching the daughter, which causes her to follow you for the rest of the scene. This was done to allow the user to pause while walking to look around and observe the city. It also becomes obvious which inhabitant is the daughter, because she starts waving at the user when they come close.
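
A minimal sketch of how this single-button movement and the daughter's follow behavior could be scripted in Unity is below; the class names, parameter values and Animator parameter are illustrative placeholders rather than the project's exact scripts (and each MonoBehaviour would live in its own file in a real project).

```csharp
using UnityEngine;

// Illustrative sketch: single-button movement for the Cardboard.
// A click raycasts along the gaze direction; if it hits the walkable strip,
// the player glides toward that point.
public class ClickToMove : MonoBehaviour
{
    public float moveSpeed = 1.5f;      // walking speed in m/s (assumed value)
    public LayerMask walkableLayers;    // assign the plank strip's layer in the Inspector

    private Vector3 target;
    private bool moving;

    void Update()
    {
        // The Cardboard button registers as a screen tap / mouse click.
        if (Input.GetMouseButtonDown(0))
        {
            Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
            if (Physics.Raycast(gaze, out RaycastHit hit, 50f, walkableLayers))
            {
                target = new Vector3(hit.point.x, transform.position.y, hit.point.z);
                moving = true;
            }
        }

        if (moving)
        {
            transform.position = Vector3.MoveTowards(transform.position, target, moveSpeed * Time.deltaTime);
            if (Vector3.Distance(transform.position, target) < 0.05f) moving = false;
        }
    }
}

// The daughter waves when the player is near and follows once approached.
public class DaughterFollow : MonoBehaviour
{
    public Transform player;
    public Animator animator;           // assumed to have a "Waving" bool parameter
    public float noticeDistance = 6f;
    public float collectDistance = 2f;
    public float followDistance = 1.5f;

    private bool following;

    void Update()
    {
        float d = Vector3.Distance(transform.position, player.position);

        if (!following)
        {
            animator.SetBool("Waving", d < noticeDistance);
            if (d < collectDistance) following = true;
        }
        else if (d > followDistance)
        {
            transform.position = Vector3.MoveTowards(transform.position, player.position, 1.4f * Time.deltaTime);
            transform.LookAt(new Vector3(player.position.x, transform.position.y, player.position.z));
        }
    }
}
```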

My task was designing the city and creating animations for its gradual destruction. Applying what I discussed above, I created a strip of land that the user could walk along through the city and placed medieval houses on the sides of the path. Through a YouTube tutorial I found a great script that breaks game objects apart into smaller pieces, which made the user's situation in the city feel more dangerous. With this script I made some objects fall onto the strip and shatter at certain points in time, while others are cued when the user moves into a certain position in the scene. I also added flames and smoke inside several houses and scripted objects and houses to fall from the city's other islands into the abyss. I added some post-processing effects to make the scene more aesthetically appealing, and fog below the city to create the illusion of a deep abyss while avoiding additional terrain or game objects that might be heavy on rendering power (the scene was heavy enough as it was!).
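
For reference, the two cue types boil down to something like the sketch below; the fracturing itself came from the tutorial's script, so Shatter() here is only a hypothetical stand-in, and the field values are illustrative.

```csharp
using UnityEngine;

// Illustrative sketch of the two destruction cues: elapsed game time and
// player proximity. The actual fracturing came from the tutorial's script;
// Shatter() below is a hypothetical stand-in for handing off to it.
public class DestructionCue : MonoBehaviour
{
    public Transform player;
    public float triggerTime = -1f;        // seconds into the scene; < 0 disables the timer cue
    public float triggerDistance = -1f;    // metres from the player; < 0 disables the proximity cue

    private bool triggered;

    void Update()
    {
        if (triggered) return;

        bool timeCue = triggerTime > 0f && Time.timeSinceLevelLoad >= triggerTime;
        bool proximityCue = triggerDistance > 0f &&
                            Vector3.Distance(player.position, transform.position) <= triggerDistance;

        if (timeCue || proximityCue)
        {
            triggered = true;
            Shatter();
        }
    }

    void Shatter()
    {
        // Stand-in: let physics take the object, then hand off to the
        // fracture script from the tutorial (not reproduced here).
        Rigidbody rb = GetComponent<Rigidbody>();
        if (rb == null) rb = gameObject.AddComponent<Rigidbody>();
        rb.useGravity = true;
    }
}
```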

I also worked on adding spatialized sound to the scene, as well as the final destruction sequence that occurs at the end. Spatialized sounds were attached to the fires, to the daughter, to an inhabitant crying and to an inhabitant screaming. This was done to increase immersion, to guide the user to investigate where these cries might be coming from, and to draw attention to the reactions and expressions of the inhabitants. The final destruction sequence triggers as the user stands on the mountain slope: loud rumbling noises play as the city falls into the abyss.
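
Setting up this kind of distance-based falloff in Unity only takes a few AudioSource settings; roughly what was attached to the fires and inhabitants looks like the sketch below (the exact distances and names are illustrative).

```csharp
using UnityEngine;

// Illustrative helper: configures an AudioSource for 3D, distance-attenuated
// playback, roughly what was attached to the fires and inhabitants.
[RequireComponent(typeof(AudioSource))]
public class SpatialCry : MonoBehaviour
{
    public AudioClip clip;                   // e.g. crying, screaming, fire crackle
    public float maxAudibleDistance = 25f;   // assumed value

    void Start()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.clip = clip;
        src.loop = true;
        src.spatialBlend = 1f;                           // fully 3D positional sound
        src.rolloffMode = AudioRolloffMode.Logarithmic;  // quieter with distance
        src.minDistance = 1f;
        src.maxDistance = maxAudibleDistance;
        src.Play();
    }
}
```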

Reflection

Overall, I was satisfied with the project and my contribution to it. Compared to other projects, I really felt like I understood how scripting worked here, especially communication between different scripts. I did grow frustrated with how difficult it was to create a complete environment that was both beautiful and responsive to the user's actions and movement throughout the scene. Even though I felt I did a solid job, a few of the animations looked a bit out of place in the environment. I think the ideation phase was where our group struggled the most, as we put a lot of thought into how the whole experience would be designed for the Cardboard while still creating an engaging and immersive scene. Our biggest question was: what would we do with the one button? Our answer was to let the user move around the scene with few limitations by reserving the button for movement up and down the city and over the bridge. I am not completely sure it was the best decision, but we had a difficult time figuring out what kind of compelling actions could be taken with the click of a button, especially in the context of feeling powerless in the face of imminent death.

Development Journal: The Fall of Octavia

For this project, Vinh, Ellen and I aim to create a narrative experience based on the destruction of Octavia, one of the invisible cities. The reason we want to depict Octavia, and specifically its destruction, is that it is described as a city in a precarious situation: the city is suspended in the air between two mountains by ropes and chains.

Calvino writes: “Suspended over the abyss, the life of Octavia’s inhabitants is less uncertain than in other cities. They know the net will last only so long.”

Depiction of Octavia

We narrowed the experience down to one inhabitant's quest to escape the city onto the mountain via one of the ropes that hold the city together. But first, the inhabitant must find his daughter in the city. This allows the user to experience our interpretation of the city and the realities of inhabitants facing their city's doom, incentivized by finding a loved one lost in the city's streets.

We want the story to illustrate how the city's dangerous location has shaped the culture of Octavia's inhabitants. As of now we are leaning towards using the destruction of the city to show their grief at losing the city, and perhaps their lives, but we have also thought about giving the inhabitants a more fatalistic attitude towards the city's destruction. Since Calvino writes that this is something the residents are aware of, we thought the inhabitants might not necessarily resist death and destruction but rather embrace it. This is still something we are considering; we want to portray the destruction of the city as something that provokes different responses, just as any disaster would.

We thought a lot about how the user could move using the Google Cardboard. After exploring a few Cardboard titles and realizing that most of their interactions did not rely on movement to create a powerful experience, we are now leaning towards having a visual cue in front of the user in the direction in which they can walk. When the user hovers over this cue (an arrow, for example), they are brought forward. As a result, we currently see the experience as following one straight path through a street in the city until the user eventually finds their daughter, then leaving the city on a footbridge to conclude the experience. We intend for the button on the Cardboard to be used for calling the daughter: the user can move along the prescribed trail, look around, and press the button to call for her.
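
One way this hover cue could work, assuming a simple forward raycast from the camera and a short dwell time, is sketched below; the names, step length and timing are placeholders, not a finished implementation.

```csharp
using UnityEngine;

// Illustrative sketch of gaze-to-move: if the user keeps the arrow cue
// under the reticle for a short dwell time, the player steps forward.
public class GazeMoveCue : MonoBehaviour
{
    public Transform player;
    public Transform arrowCue;          // the visual cue placed ahead of the player
    public float dwellSeconds = 1.0f;   // assumed dwell time before moving
    public float stepDistance = 2.0f;   // assumed step length along the path

    private float gazeTimer;

    void Update()
    {
        Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        bool lookingAtCue = Physics.Raycast(gaze, out RaycastHit hit, 30f) &&
                            hit.transform == arrowCue;

        gazeTimer = lookingAtCue ? gazeTimer + Time.deltaTime : 0f;

        if (gazeTimer >= dwellSeconds)
        {
            // Step forward along the flattened gaze direction, then reset the dwell timer.
            Vector3 forwardFlat = Vector3.ProjectOnPlane(Camera.main.transform.forward, Vector3.up).normalized;
            player.position += forwardFlat * stepDistance;
            gazeTimer = 0f;
        }
    }
}
```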

Update 4/29:

We have started work on designing the scene of the experience. Vince has taken charge of character animation, Ellen of scripting interactions and movement with the Cardboard, and I of designing the environment and the destruction of the city.

The style we decided to follow was that of a medieval town. We have one big stretch that contains most of the city as well as other floating components that add to the environment of Octavia.

I have worked with a few destruction scripts I found online that break objects into many pieces. I also have a game timer that allows me to script when each destruction animation occurs. The difficulty now lies in deciding exactly what effects will play as the city is gradually destroyed. Here are a few screenshots of the scene so far:

Update 5/9:

Over the weekend I worked on adding sounds to the scene. More specifically, I worked on making the sound spatialized: the further the camera is from the game object that is the source of the sound, the quieter the sound is. For now, we have it attached to the daughter as a prompt for the user to search for the cries that cut through the other environmental sounds (wind and fire). I also added a loud earthquake-like noise that plays as the user crosses the bridge into the mountain, prompting the user to look back and see the destruction of Octavia. Finally, I finished scripting the destruction of the city. To do this, I separated all the contents of the scene into eight game objects. When the player crosses the bridge, this triggers the addition of a Rigidbody to each of these game objects, which prompts their descent into the abyss.
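
The bridge trigger amounted to something like the following sketch: a trigger collider on the bridge path adds a Rigidbody to each city group and starts the rumble. The field names and grouping here are illustrative, not the exact script.

```csharp
using UnityEngine;

// Illustrative sketch: when the player crosses the bridge, give each of the
// eight city groups a Rigidbody so gravity pulls them into the abyss.
[RequireComponent(typeof(Collider))]   // set the collider to "Is Trigger"
public class BridgeCollapseTrigger : MonoBehaviour
{
    public GameObject[] cityGroups;     // the eight grouped game objects
    public AudioSource rumble;          // loud earthquake-like sound

    private bool fired;

    void OnTriggerEnter(Collider other)
    {
        if (fired || !other.CompareTag("Player")) return;
        fired = true;

        if (rumble != null) rumble.Play();

        foreach (GameObject group in cityGroups)
        {
            Rigidbody rb = group.AddComponent<Rigidbody>();
            rb.useGravity = true;       // gravity pulls the group into the abyss
        }
    }
}
```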

Apart from the falling objects that are triggered based on the current game time, I also made some objects fall when the user gets within a certain distance of them, making the city's destruction feel more immersive and real.

We also worked on importing the Google Cardboard reticle functionality into the scene along with the ground that the user is supposed to walk along.

Invisible Cities Response: The City of Fedora

Time is used in Fedora as an inevitable agent in shaping people's perceptions and ideas of the city, as well as an obstacle to the desired changes that exist in its inhabitants' imaginations. The creation of the blue globes reflects someone's hopeful imagination of a utopian Fedora, but in the time it takes to create one, the city changes drastically, confining that imagined future to a glass globe. In this sense, the vision becomes invalid both through the warping of perceptions of what the city could be, caused by the new image of a changed Fedora, and through the impossibility of completing the imagined reality because of physical changes in the city. Time stomps out a branch of what Fedora could have been, as the city evolves along another path while the creator envisions a unique branch of their own.

Time also functions as something that preserves ideas and beliefs of what Fedora should be. This is done through the museum of globes, which displays the immortalized visions of Fedora. In a contradictory manner, Marco Polo also hints that time is not necessarily a hindrance or an obstacle resulting in the death of Fedora's possible realities, as he notes that these visions are “equally real” alongside the stone Fedora of today. This is because the Fedora of today was created and shaped when it was “accepted as necessary” but “not yet so”; it too existed as an imagined reality before its completion. In this sense, the imagined cities of Fedora's inhabitants are also possible, as they all existed at one point as “assumptions,” along with the big, stone Fedora.

Fire Planet | Documentation

Description: Fire Planet is an experience/game that takes place on a planet engulfed by never-ending fires. To ensure the security of the mysterious civilization living there, a dome has been erected around its city, along with powerful water sprinklers that fight off the constantly encroaching fires. The user assumes the role of a protector of the planet, using mysterious magical powers to shoot projectiles that extinguish the planet's flames. In this scenario, one of the sprinklers has malfunctioned, allowing the fires to move towards the dome. The user is tasked with putting out the fires and reaching the broken sprinkler to fix it and ensure the security of the city.

Brainstorming: We started brainstorming by discussing which everyday actions would be interesting to replicate in VR in an alternate reality. We discussed ideas such as picking up trash and throwing balls, and we eventually settled on designing an experience around the action of using a hand fan. This would entail swinging the VR controller from left to right or up and down. From this, we came to the concept of using a powerful hand fan to blow away or put out fires.

Initial scene brainstorm
I envisioned the fan to resemble something like this (not sure if this is what my group mates envisioned)
This is a character from the anime Naruto who swings a powerful fan

Because a hand fan powerful enough to put out large fires is not realistic, we built a concept around firefighting in an alternate reality. We decided to create a narrative of a civilization on another planet constantly threatened by approaching fires.

Due to the changes we had to make for the project, we simplified it from waving a fan, which we believed would not be as compelling an experience with a mouse, keyboard and computer screen, to simply aiming and throwing orbs of particles that put out the fires. Because of the disconnect between the motion of throwing or aiming and using a keyboard and mouse, we found it hard to make the action intuitive beyond most people's ingrained experience of playing video games with a mouse and keyboard. In this alternate scenario, though, the action of pushing the hand forward mimics throwing something, and we found it vital to add this hand animation to the experience.
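
A minimal sketch of this aim-and-throw interaction under mouse and keyboard control is below; it assumes a projectile prefab with a Rigidbody, and the names and speed value are placeholders rather than our exact code.

```csharp
using UnityEngine;

// Illustrative sketch of the mouse/keyboard throw: on click, spawn the
// water orb at the hand and push it along the camera's aim direction.
public class OrbThrower : MonoBehaviour
{
    public Rigidbody orbPrefab;     // projectile prefab with a Rigidbody (placeholder)
    public Transform handAnchor;    // spawn point at the on-screen hand
    public float throwSpeed = 15f;  // assumed speed in m/s

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
        {
            // Spawn at the hand and launch along where the camera is aiming.
            Rigidbody orb = Instantiate(orbPrefab, handAnchor.position, Quaternion.identity);
            orb.velocity = Camera.main.transform.forward * throwSpeed;
        }
    }
}
```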

Process: We divided the work, with me doing the scene design and character animation, Will the particle interactions and Mari the projectiles. We worked together to bring all of our parts together to finish the project.

For the scene design we all had in mind a desolate planet with no vegetation, so I decided to make the environment dark and the terrain rocky. I designed the city in the background using a sphere with a transparent material, together with some building assets and bright lights, so that it would contrast with the desolate environment the user stands in. I also added a red spotlight emitting from the base of the dome to tint the environment red and add some urgency to the actions required of the user.

Animating the hands was something we believed had to be done so that the user would feel like they are the firefighter. I initially started experimenting with models from Mixamo but found these difficult to control, and I could not figure out how to remove certain parts of the avatar so that they would not obstruct the camera's main focus on the hands. With Sarah's advice, I was able to find a way to use the VR hands and to add simple animations (point, closed fist, open hand) to them. I also felt that the default skin of the hands fit nicely, with red and black gloves resembling the outfit of a sci-fi game character.
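
Once the Animator was set up, switching poses from the keyboard only took a few lines, roughly like the sketch below (the key bindings and parameter names are my own illustration, not the exact setup).

```csharp
using UnityEngine;

// Illustrative sketch: drive the VR hand poses (point, fist, open) from
// keyboard input via Animator triggers. Parameter names are placeholders.
public class HandPoseController : MonoBehaviour
{
    public Animator handAnimator;   // Animator on the VR hand rig

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Alpha1)) handAnimator.SetTrigger("Point");
        if (Input.GetKeyDown(KeyCode.Alpha2)) handAnimator.SetTrigger("Fist");
        if (Input.GetKeyDown(KeyCode.Alpha3)) handAnimator.SetTrigger("Open");
    }
}
```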

Lastly, I worked on creating the orb that would be shot at the fires. For this, I created a simple particle system using Unity’s Visual Effects Graph and attached this to the projectile game object that Mari made.

To tie this all together, we added a voiceover to greet the user and provide some context for the experience, as well as a closing voiceover when the user completes the task. We also played around with how best to guide the user towards the sprinkler they needed to fix, which we did by adding a semi-transparent blue cylinder with the sprinkler inside. We also added an animation for when the user reaches it: the sprinkler springs up and jets of water begin shooting out of it to confirm that the user completed the task.
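
The closing beat works roughly like this sketch: a trigger on the blue cylinder waits a moment, then plays the sprinkler animation, the water jets and the ending voiceover. The component names, trigger name and delay value are illustrative.

```csharp
using UnityEngine;
using System.Collections;

// Illustrative sketch of the ending: entering the blue cylinder briefly
// delays, then starts the sprinkler animation, the water jets and the
// closing voiceover, so the reward clearly follows the player's action.
[RequireComponent(typeof(Collider))]    // collider marked "Is Trigger"
public class SprinklerGoal : MonoBehaviour
{
    public Animator sprinklerAnimator;     // assumed to have a "SpringUp" trigger
    public ParticleSystem waterJets;
    public AudioSource endingVoiceover;
    public float rewardDelay = 1.5f;       // assumed delay before the payoff

    private bool done;

    void OnTriggerEnter(Collider other)
    {
        if (done || !other.CompareTag("Player")) return;
        done = true;
        StartCoroutine(PlayEnding());
    }

    IEnumerator PlayEnding()
    {
        yield return new WaitForSeconds(rewardDelay);
        sprinklerAnimator.SetTrigger("SpringUp");
        waterJets.Play();
        endingVoiceover.Play();
    }
}
```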

Screenshots

Gameplay:

Reflections:

Overall I had a great time doing this project and I am happy with how it turned out. It was difficult to translate motions designed for VR controllers into a mouse/keyboard experience, and I am glad we still more or less stuck with our initial concept because it was something I really liked. I am happy with how our game design turned out and with the cues and instructions we provided the user. Positioning the fires and the cylinder ahead of the player made it intuitive that the user must go forward to complete an objective.

Agency Question:

I believe that we gave the user agency that compelled them to act in a certain way by giving them the ability to react to the environment; in this case, they have the complete ability to extinguish the fires. Because fire evokes an immediate response of fear and danger, we believe a user's instinct is to put out as many fires as quickly as possible. We spawned the user right next to the fires, facing the objective, so that their instinct, even without any cues from the voiceover, would be to put out the fires. Furthermore, we placed the fires more or less in a line towards the objective, so the user extinguishes fires until they reach the blue cylinder. Lastly, we wanted the final interaction of entering the cylinder to be a rewarding one, delaying the sprinkler animation and the ending voiceover so that it is apparent the user accomplished their job and successfully controlled the environment's dangerous fires.

Project 2 Development Journal | Fire Planet

For this project, Mari, Will and I wanted to create an enjoyable, short and repeatable experience that engaged the user in a way unique to VR. We brainstormed many ideas, including flinging orbs at the environment to bring about setting changes, furnishing a home, and experiences revolving around interacting with the environment as an omniscient, floating, god-like character. The idea that led to the final concept came from our discussions of setting the project in a home and creating trials and puzzles to achieve a certain outcome. We discussed how to instill a greater sense of urgency in the user so that they would not mess up or mindlessly wander around the house searching for various objects. One of those ideas was fighting a fire with a fan if the user failed to perform set actions correctly. From this we elaborated on how this could be considered an alternate reality and how the action of using a fan to repel a fire could be an experience in itself.

Ideas brainstormed

From this we developed the idea of having to protect something, and yourself, from fire, and of how fire could be a realistic feature of the environment. After some brainstorming we arrived at the idea of protecting a city on an alien planet, characterized by its continually burning fires, from the flames creeping up on it.

The concept is that the city has been forced to install sprinklers that shoot water to keep the encroaching fires from reaching the city. The user takes the role of a “firefighter” tasked with putting out these fires. However, one of the sprinklers has been hit by an asteroid, crushing it and wedging it into the ground. The user has a giant hand fan which they swing to generate wind, pushing the fire back far enough that they can get to the sprinkler, push off the asteroid, and pull the sprinkler out of the ground. This is all explained to the user through a radio voiceover that plays at the beginning of the game.

Scene: with the user facing the fires they need to put out with their fan to turn off the sprinkler in front of them

What we want to do next is research assets and particle systems. We believe that the characteristics of the fire we are able to create will determine how the rest of the environment is styled. We will also do research on particle systems and how best to create the interactions between the wind from the fan and the fire. 

Update: March 11

I was able to add a simple animation with hands attached to the first person controller. What this does is create a short animation of the hand going forward and opening the fist. I intend for the “spell” (orb of water) to be emitted from the hand at the peak of this animation.

I used the hand models from Oculus along with the SteamVR package. I followed this tutorial for adding a simple close-fist/open-fist animation to the hand and modified the script from it so the changes are triggered by a key press. I also added a simple script that moves the right hand forward as the fist opens and closes.
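
To release the orb at the peak of the motion, one option is an Animation Event on the clip that calls back into a small script like the sketch below; unlike spawning the orb immediately on click, this waits for the peak frame. The trigger, prefab and method names here are placeholders, not the exact scripts.

```csharp
using UnityEngine;

// Illustrative sketch: an Animation Event placed at the peak frame of the
// hand's push-forward clip calls ReleaseOrb(), so the orb leaves the hand
// exactly when the fist opens. A key press starts the animation.
public class SpellHand : MonoBehaviour
{
    public Animator handAnimator;       // hand rig with the push/open clip
    public Rigidbody orbPrefab;         // placeholder water-orb prefab
    public Transform palmAnchor;        // empty object positioned at the palm
    public float launchSpeed = 12f;     // assumed speed

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
            handAnimator.SetTrigger("Cast");    // assumed trigger name
    }

    // Called by an Animation Event set at the peak of the clip.
    public void ReleaseOrb()
    {
        Rigidbody orb = Instantiate(orbPrefab, palmAnchor.position, Quaternion.identity);
        orb.velocity = Camera.main.transform.forward * launchSpeed;
    }
}
```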

This was quite challenging, as I initially looked at using 3D animated models but found them hard to coordinate with key presses. I also found it difficult to get the camera position just right for these models, as the head or torso would sometimes come into view of the camera.

Here is a video of the hand animation in action:

Update: March 13

I have been working on the scene design of the fire planet. Yesterday I used the terrain tool to create a rocky, hilly planetary terrain. I also created a fog particle system to represent the constant smoke on the planet, and a dome with a few buildings in it located behind the player as they enter the scene, so that the user feels they must defend the city from the moving fires. The dome shines white, while a red spotlight emits from the base of the city onto the rest of the scene; I did this to reinforce the fiery environment and the sense of emergency as the user is required to put out the fires. Lastly, I created water sprinklers using this tutorial, as well as fires pulled from the Unity Particle Pack.

A few things that I want to do with the scene:

  • Make the lighting around the dome and the city more realistic. I am not really sure how to do this without creating a design for a live, dynamic city.
  • Fix the issue with the fog impacting the contours of the terrain viewed through the fire.

Today we hope to merge all of our scenes together and have the animation of the arms coordinate with the release of a water grenade. We also hope to add Will’s script to allow the detection of collisions between the water and fire particles in the scene.
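
Will's collision script is his own, but the general Unity pattern looks something like the sketch below, using OnParticleCollision with collisions and “Send Collision Messages” enabled on the water particle system (the tag and hit count are illustrative assumptions).

```csharp
using UnityEngine;

// Illustrative sketch of water-vs-fire particle collisions (not Will's
// actual script). The water particle system must have its Collision module
// and "Send Collision Messages" enabled; this component sits on the fire,
// which needs a Collider for the particles to hit.
[RequireComponent(typeof(Collider))]
public class FireHealth : MonoBehaviour
{
    public ParticleSystem flames;
    public int hitsToExtinguish = 5;    // assumed number of water hits

    private int hits;

    void OnParticleCollision(GameObject other)
    {
        // Only react to the water orb's particles (tag is a placeholder).
        if (!other.CompareTag("Water")) return;

        hits++;
        if (hits >= hitsToExtinguish)
        {
            flames.Stop();              // stop emitting; existing particles fade out
            Destroy(gameObject, 3f);    // remove the fire shortly after
        }
    }
}
```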

Here is the video of the scene:

Update: March 15

Today we continued to work on tying all the parts together and finalizing the style and interactions. On Friday, we were able to get the user successfully putting out fires by shooting the particle orb from the gloves of the on-screen character. On Saturday, we finally fleshed out our narrative and decided to stick with the original idea of a civilization living under a dome on a planet plagued by constant fires. We debated whether to change the premise to protecting the civilization's final tree from forest fires, but scrapped this in favor of the original idea. We also discussed how to make this narrative apparent in order to provide context for the scene and motivation for the user to focus on the objectives. To do this, we decided to record a voiceover giving the player explicit instructions on what to do.

We also decided to style the orb/grenade similar to a magical ball, using a simple particle system made using Unity’s Visual Effects Graph. This allowed the effect to appear more magical, tying into the narrative’s description of the character as someone with “powers.”

Today we added a final script to detect when the character enters the transparent cylinder that marks the final destination. Upon entering, a voiceover congratulating the user on their efforts plays, and the sprinkler comes to life and shoots a stream of water away from the dome.

Reaction: Response As A Medium

Response acts as a medium that creates a predictable and consistent relationship between humans and computers. Krueger suggests that this interaction is, in itself, a new artistic medium, composed of input data received through sensors, cameras, etc., rules designed by the artist that process this information, and an output reflecting these rules and the input. The many steps required to design and implement this relationship are what Krueger views as the medium, since artists in the field are forced to think about how people will engage with the piece. For Krueger, response as a medium is not focused on the output, the visual and auditory responses from the computer; he even argues that these might distract from the relationship between human and computer. It is rather this relationship that is the primary component of the medium.

The reason this is a medium in itself is its ability to make viewers participating actors in works of art. Furthermore, the artist maintains some distance from the piece, providing the rules, systems and creative vision for its operation, but potentially “relinquishing total control” as the piece is experienced and used by people. This contrasts with previous mediums of art: viewers become users, using their bodies to influence the piece, and artists are unable to assert their creative vision as strongly as they once did.

Final Documentation: A Walk In The Forest

Project Folder

Project Description

A Walk in the Forest is an immersive, 360-degree experience created in Unity. The experience takes the user into a temperate forest on a cloudy day at sundown. The project is stationary, allowing the user to look around from a position on the bank of a lake, with two bodies of land connected by a wooden bridge across the water. The experience also uses sound to make the environment more immersive, with light wind, water lapping against the shore, and the calls of birds and insects. A Walk in the Forest is an attempt to make a realistic forest environment in Unity, using the terrain tool and various assets to give great detail to the user's surroundings.

Brainstorming 

The inspiration behind this project was the city I grew up in, Baguio City, in the Philippines. The city is located nearly a mile above sea level and is characterized by its tall pines and cool weather. My affinity for forests served as the primary inspiration for this project. Another inspiration came from my experiences in Europe during J-Term, walking in forested areas and parks in Germany. This also inspired the project because I had not experienced a proper winter climate before, and I found the empty winter forests beautiful.

Pine trees in Baguio City

I have always appreciated how forests make me feel, and I often walk in the forest when I need to relax or reflect. Because of this, I was inspired to push myself to create a VR experience that was as close to a real-life experience as possible. I wanted the environment to be relaxing but also to keep a certain distance from the user, as if the forest had its own indifferent aura, its ebbs and flows unaffected by the viewer. I have always found this indifference of nature relaxing, as the environment remains the same regardless of my current mood and thoughts. I have also always enjoyed being around rivers and lakes, whose constant movement and rhythm echo that same indifference.

I started by creating a map of an aerial view of what I thought this could look like. I decided to add the bridge, and later a bench placed close to the camera, in a deliberate attempt to make the environment more approachable, as if it were part of a longer walk taken by the user. Below are also two sketches of what I thought the views would look like.

Process

With Sarah's advice, I started familiarizing myself with the terrain tool to create the landscape. This was fairly easy to do, but the difficult part was making the terrain look realistic: I was not sure which paint textures, or which combinations of dirt, gravel and grass, would achieve that. The other issue was the trees. I found an asset pack that allowed me to edit trees and their number of branches, leaves, etc., but I found it difficult to achieve the barren, leafless forest I envisioned because the trees all started to look very similar to one another, so I switched to tall conifers instead. I also placed the camera on the bank of the river, with a trail leading uphill from it, a deliberate choice to make it seem as if the user had walked down the path to where they now stand by the water.

Apart from the trees, I used many assets, including rocks, tree stumps, fallen trees and a bench, to make the environment as realistic as possible. I created the bridge from wooden planks built out of cube game objects.

Lastly, I wanted to make the scene slightly dim, as if on a cloudy day. I used post-processing to add ambient occlusion and color filters, which gave the effect of a darker, overcast day. I also used a particle system to add light fog, and found a skybox of a setting sun on a cloudy day.

I wanted to build the project to my Android phone for viewing in the Google Cardboard, but unfortunately the scene was split in half when played on my phone. I tried changing the orientation in the build settings, but nothing fixed the split screen.

View split in the Google Cardboard

Screenshots from the scene:

Final Reflections:

Overall, I am happy with the project, but I still wish I could have made the scene look more like a real-life forest. For the following projects I want to look into how to achieve more realistic graphics using all the tools in Unity. I still have no idea what I will do for those projects, but I realized how powerful Unity is and how much more I can do to make environments truly immersive and engaging.

I really enjoyed this project because it made me think about arranging objects in a scene with purpose so that they contribute to an engaging experience. Although I still have a lot to learn, thinking about why I place things where I do, more deeply than intuition alone would tell me, was a challenge and a valuable experience.

Project 1 | Introspective Forest

My project was inspired by stark winter landscapes I have experienced, in particular the contrast between a moving, rhythmic lake or river and the still, barren trees of a forested area I visited last month. I hope to create an environment largely devoid of horror, hope, mysticism, happiness, etc., one that instead invites introspection, free of our tendency to project our own mental state onto our perception of the environment. I am really interested in how we project our mental states onto our surroundings and our perception of space and light, and I want this to be as close to a genuine environment as possible.

Below are two pictures that look similar to the scene. Although I realize the environment will look different from these pictures because I intend to use mostly low-poly assets, this is the closest I can come to imagining what the scene would look like in real life.

I intend for the user to stand on the bank of a lake on a piece of land that is heavily forested. Across the body of water in front of the viewer, there is a small bridge. For my rough sketch, I drew the primary viewing environment, with the user facing the bridge connecting two sides of land. I intend for the bridge to be rustic and untouched. I hope that this will make the user wonder about the bridge’s use, why it is there, who uses it and whether or not they can use it. 

Map of the scene

Apart from this, I intend to play around with Unity's environment effects, hoping to animate the water with ripples or small waves. I also want to add mist or fog to reinforce the theme of loneliness in a way that invites self-reflection.

Two views of the environment

Update (February 14)

I was able to make a lake with the help of this YouTube video, which gives an in-depth tutorial on using Unity's terrain tool to create a lake: https://www.youtube.com/watch?v=HeRh24-QUoI

This tutorial was also helpful for getting started with terrain painting. As of now I believe I have a strong idea of what I want to do, but I am not sure how to implement it aesthetically. I am also struggling with using the camera and the scene editor, and with positioning objects accordingly. I think it will take a lot of experimenting with the environment and the assets I bring into the scene to determine what will be ideal.

Update (February 16)

I have continued working steadily on the project and have made the terrain look more realistic by playing around with the paint texture tool, as well as with the lighting, fog and other objects, to make the scene feel more like a secluded forest. One issue was finding ways to make it look like a harsh winter forest, as most of the assets available on the asset store seem geared towards summer environments. Because of this, I have stuck with a secluded forest but decided to add more elements to the environment that suggest comfort or the ability to feel safe alone in nature. Some ideas I have are adding a bench and a more visible walking trail.

A big problem has been choosing the trees, because they make or break the experience of being in a forest. I was set on using leafless trees created with the tree generator in an asset pack I found, but they were too unrealistic and it was hard to make each tree in the forest appear unique. I have switched to the trees in a conifer forest asset pack, and I find these much more effective in creating a realistic forest.

Lastly, I have implemented a particle system that creates the illusion of fog. This was not too difficult following this tutorial; I played around with the lighting to achieve my desired effect of a light mist.

Previous leafless trees I used

Update: February 18

Today I added the finishing touches to the scene: some ambient sounds (rippling water and light wind), more assets to add variety to the environment, and some post-processing effects. I still found it difficult to make the environment truly realistic, even as I played with the painted textures on the terrain.

I am quite happy with the overall result, and I am now very comfortable using the terrain tool and manipulating assets within the scene. I spent some time trying to build the scene for the Cardboard, but I ran into the issue of the screen being cut in half within the viewer.

Instead, I found a script online that lets me rotate the camera through a full 360 degrees and decided to build the project for my Mac. The Cardboard build was quite a frustrating process: I tried messing around with the orientation settings, but no matter what, the build on my phone remained split.
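
The script I found amounts to a basic mouse-look; a minimal equivalent of what it does is sketched below (the sensitivity value is illustrative, and this is not the exact script I downloaded).

```csharp
using UnityEngine;

// Illustrative sketch of the 360-degree mouse-look used for the Mac build:
// horizontal mouse movement yaws the camera, vertical movement pitches it.
public class MouseLook360 : MonoBehaviour
{
    public float sensitivity = 2.5f;    // assumed sensitivity

    private float yaw;
    private float pitch;

    void Update()
    {
        yaw += Input.GetAxis("Mouse X") * sensitivity;
        pitch -= Input.GetAxis("Mouse Y") * sensitivity;
        pitch = Mathf.Clamp(pitch, -89f, 89f);   // avoid flipping over the poles

        transform.rotation = Quaternion.Euler(pitch, yaw, 0f);
    }
}
```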

Reading Response: Hamlet on the Holodeck, Chapter 3

I think it is most interesting to consider how VR will utilize the encyclopedic properties of digital environments to create engaging experiences. All of the knowledge present on the internet and networked devices can be explored in a setting that allows complete immersion, but how can virtual reality present this information through near full-sensory immersion? I believe that apart from adhering to the physical and spatial properties of the real world, virtual reality experiences will face the difficulty of enhancing our experience of movement within the experience. The challenge will be: how can the most engaging aspects of viewing and navigating the world around us be combined with innovative ways of movement and sensory stimulation in virtual reality to enhance the user's experience? I believe VR will force developers and designers to rethink the current state of “encyclopedic knowledge,” which at present is limited largely to video, photos and text on the internet, and to consider how new and old knowledge can be transformed into appealing experiences in VR.

I believe that Murray’s description of Sid Meier’s Civilization as a game that can inhibit our ability to interpret the underpinnings of alternate realities through the experience’s seeming encyclopedic knowledge will remain more or less true in virtual reality. In fact, I personally believe that in its present state, virtual reality may inhibit users’ ability to “ask why things work the way they do” because of the physical constraints that VR poses with its wonky headsets and controllers. The mere fact that I can explore the whole database of the rules and backstory of Civilization in mere seconds and with moving my mouse by a few inches is extremely efficient compared to the required physical, full-body movements in virtual reality. Therefore, I believe VR designers will need to consider how experiences can allow full, conscious participation of users while remaining engaging and requiring minimal mental and physical exertion to question and understand the environment around them.