Documentation: Let It Out!

Inspired by my first project of creating a meditative space, and by watching users hijack the intended goals of my second project to throw trash as far as they could, I wanted to create a stress relief room that would explore users' destructive tendencies in VR and also challenge myself to depart from the normally slow, poetic experiences I design. I had seen several videos of "Rage Rooms" where the players just started smashing plates and breaking computers, all for the price of around $45/hour. The existence of such rooms seemed terribly wasteful and also inaccessible to the everyday person who could not afford to shell out so much money in the name of self-care. I also knew that I wanted to create a project that had a compelling need to be done in the medium of VR.

Thus, I began thinking of interactions I could create in this room. I began with smashing plates, hitting a punching bag, and burning things in a fire. To create a sense of story, I thought of dividing one wall into three partitions, one for each interaction. As each interaction was completed, one part of that single wall would fall to the ground, revealing an open forest sky. The room itself was going to be a typical living room with a couch, a TV, and a shelf of china plates. I decided, though, that I wanted a more peaceful environment to promote stress relief, but not one so soothing that players wouldn't feel motivated to break things apart. Inspired by Japanese training dojos, I created an environment in the same spirit…meditative but still conducive to action-oriented interactions.


The interaction of smashing plates turned out to be very satisfying. Once I fixed the performance issues of instantiating so many objects at once, I made the plates the focal piece of my experience by placing them in the center of the room near the player. I placed them on wooden tables to match the ambience of the room, but if I'm being honest, I would have liked to design my own tables had I more time, as I was pretty dissatisfied with the aesthetic of the tables in the asset store.

I had a fire in the middle of the room, and when one threw a notebook into the fire, the notebook would dissolve. This was pretty unsatisfying and the fire looked weird, so I scrapped it almost immediately. An interaction I felt sad to destroy was the ball throwing. I really liked how, if the player threw the ball into the box, a white line appeared, marking the site of a successful throw. I thought players would be excited to get the ball in from further and further away, but compared to the plate smashing, the ball interaction produced less satisfaction. Furthermore, the ball was just annoying to get inside the box, and its bounciness clashed too much with the rhythm of the plate smashing, so I had to scrap it.

awkward fire!

I began with a sunset skybox that was slightly dark with stars in the sky. I intended it to be spiritual, but playtesting revealed that it was really just spooky. I loved its aesthetic, but I believed I needed a more relieving feeling outside the room. I ended up creating an open sky so that there would be a greater sense of openness juxtaposed with the constraints of the room.

the scrapped sunset

I found a cool hatchet in the asset store, so I decided to incorporate it into the experience. I thought it would be fun for the player to smash printers with it and break them into pieces. However, the printers broke into awkward clumps that I couldn't seem to adjust in Blender no matter what I tried. Thus, I decided to simply destroy the printers and trigger an explosion, which I hoped would create a satisfying feeling.

Having a hatchet, though, added a specific affordance to the piece. With access to a hatchet, the player would expect to be able to smash more things beyond the printer. Thus, I added the ability to destroy the walls…instead of having a partition for each interaction, I just split the walls evenly, separated by columns. I tried to change the texture to crack the wall every time it was hit with the ax, to give the experience more depth and make the user feel more powerful, but it looked quite horrible because the crack did not appear at the site of collision. Thus, the wall was simply destroyed with each hit.

With the walls destroyed, the sky looked quite open, and I wanted to add something that contrasted with the interior of the room, something that felt playful and nostalgic. I began with bubbles, but they felt too light. Clouds just looked bad. Then I settled on balloons, and ended up having bubbles blow out each time a balloon was popped, just because I really like bubbles (I originally had a laser gun). I decided to replace the laser gun, which felt too light, with a bow and arrow because I remember how powerful I felt using the bow and arrow in the VR Maze experience during our class trip.

However, the balloons really clashed with the remaining floor in the room once the walls were opened up, and I also wanted to convey a sense of resolution to the player. I ended up switching the floor to white, removing the roof, creating a colorful marquee at the end, adding bright white particles that danced down to the floor, and changing the music. The combination of all this with the balloons conveyed a sense of nostalgic playfulness and magic.

I tried to make the logic of the space clear through the placement of objects. I made the plates the first things the player saw, so that they would use their hands to pick them up and throw them. On the right, the hatchet was placed on a table, but the printers were on the floor so that the player knew not to try to pick them up. The longbows were placed at the ends of the room, one on each side, so that the player could use them to pop the balloons.

In terms of music, the main background clip was an instrumental of "Do I Wanna Know?" by Arctic Monkeys, because it is quite pumped up and energetic but not overly intrusive; listening to it makes me feel ready for anything. For the ending song, I chose a specific piano cover of Kygo's "Firestone" because it conveys a sense of happiness with a tinge of melancholia and is light in tone, contrasting with the heaviness of the first song. I also tried to provide feedback to the user through sound. The shattering of the plates was accompanied by a glass-shattering sound that I think contributed to the satisfaction of that interaction. Hitting things with the hatchet produced a thud. Popping the balloons had a satisfying popping sound. Smashing the printers produced an explosion sound, but to be honest, I wish I had used a shorter clip, as the one I used was unnaturally long for that specific collision.

Reflecting on the overall experience, I am proud of what I accomplished as a one-person team. I really wanted to challenge myself to become more comfortable with Unity scripting, so I created interactions that didn't necessarily rely on the SteamVR interactions (though I love the Throwable script; it makes life so much easier). I did a lot more scripting than in my last project and feel quite comfortable now. I also really enjoyed hearing the positive user feedback at the showcase; surprisingly, no one noticed the bugs. It was nice to hear a couple of people describe it as a stress-relieving experience even though I hadn't mentioned my objectives to them. Overall, I have a tendency to go off on tangents, which is great when trying new things creatively that improve your project or take it in a new direction, but not so great when you drift so far off course that you end up breaking something. I broke my project so many times I don't know which branch is the real one. I also realized that because I don't necessarily set clear objectives of what I want to accomplish, but rather just see what happens through the process, there's never a finished state…there's always more you can do.

There is definitely a lot of room for improvement. For one, I'd fix the buggy walls: the ones that just would not register a collision and break. I'd polish the design of the interior in terms of the objects in the room. I'd make the explosions smaller and closer to the site of collision. I'd improve the aesthetic of the instructions. I'd add a restart button. And I'd fix the particles so they only start coming down during the ending.

Class Trip!

I felt strange walking around the VR Park, as I used to go there with my cousins a lot when it was an arcade. The arcade had the exact same layout as the VR Park: the roller coaster, the Burj Drop, and the dune bashing attractions were nearly the same as in the arcade, but were now a completely different experience with the addition of VR. In other words, these VR experiences were designed around an already existing physical experience. I remember riding the old roller coaster quite well: you could see the track and feel the thrill of anticipating a drop. With VR, however, a new dimension was added: a stronger narrative with a stronger role for the viewer. On the old roller coaster, the focus lay in the physical infrastructure, whereas in VR the physical infrastructure is not the focus but rather complements the VR narrative to give a stronger sense of storytelling.

My favorite experiences were the VR maze and the zombie shooting game. The VR maze was a really powerful individual experience that was well-engineered with a meaningful narrative. The frequent shooting of enemies with a bow and arrow as well as the collecting of the treasures created consistent levels of achievement to keep the viewer emotionally invested in the experience. The sound contributed to the ambience and also provided feedback for the viewer whenever they successfully shot an enemy. The four person zombie shooting game was also incredible for the immersive experience it provided. The sound contributed to this immersion as it made the zombies feel closer and more pressing. Additionally, having the headset and being able to hear the other team members made you feel part of a team and made it that much more important to protect the base from the zombies. One interesting aspect of the zombie game was that it sort of broke the “fourth wall” or whatever the VR equivalent is. As the game mode switched from training to the introduction to the actual game, you could see the grids and the VR system booting the new mode, making you aware that this was an artificial experience. Despite that, the game portion was still immersive.

It was also interesting to see how the roles we took on were represented in this VR world. In some of the rides, like Dune Bashers or the Burj Drop, I appeared as a white man, which created a disconnect with my role in the experience. In most of the attractions, however, no part of my body could be seen, which actually made me feel more immersed. Another challenge of running a VR experience at such a large scale is the potential for technical difficulties. For instance, when I went on Dune Bashers, there seemed to be an error: my VR experience conveyed that I was stationary in a garage while the physical experience was dune bashing. That disconnect between the VR and physical narratives made me quite nauseous and made the whole thing feel artificial.

Development Blog: Let It Out!

I’ve had a few ideas for my final project. Here are some of the things I’ve been bouncing around:

  1. A tycoon-style cake shop game where the player has certain goals to meet per day. This is largely inspired by my love for computer games like Diner Dash or Hot Dog Bush, as purposeless as they might ultimately be. I have decided not to go in this direction, as I would want to put a lot of effort into character development and into creating an alternate universe with meaning, and I don't have the Maya skills yet. Additionally, the rhythm of the experience would be particularly important to me, and I think I'd need at least two months of user testing to find that perfect balance of speed and to see which movements/baking processes need to be simplified while still retaining the meaning of baking a cake.
  2. Some sort of Inception-like VR experience in which the viewer builds their own VR experience within the environment, rather than needing access to software like Unity to create VR. They are then able to view what they built. It's especially the part of wearing a headset while already wearing a headset that leaves me unsure of how to proceed.
  3. A sort of whack-a-mole in a beautiful, mountainous alternate reality where Post-it notes keep appearing as an interface and won't leave until you've clicked them, conveying the never-ending to-do list. But I want the viewer to have a way to break the cycle, and I can't think of a creative way to do so that makes the most of VR.

After reaching a dead end in all my project thoughts, I decided to revisit my previous projects to understand my biggest takeaways from them and to see what input they could provide to my final project.

From my first project, I really liked the concept of using VR in meditation given what an immersive space it is. I have found myself using the world I created and playing a guided meditation in the background as I get easily distracted in the physical world. My second project made me realize how important user testing is, but also how users tend to gravitate towards the destructive: throwing things in the wrong places, teleporting into colliders, etc.

Can I confess? Despite all our readings and looking at cool VR projects, I am still not entirely sold on the necessity of VR. I understand its incredible applications for museums, medical training, marketing, etc. However, I get so irritated when I see a project made in VR for the sole purpose of being made in VR. In other words, I think a lot of VR out there doesn't maximize the potential of VR and hasn't been designed for VR; it's simply a game from the physical world adapted for a VR headset. Thus, for my final project, I want to create something that has a clear advantage and purpose for being designed in VR. This, combined with my insights from my earlier class projects, made me realize that I want to create a therapeutic project like my first, in line with the theme of "Renewal." However, I want to try something different from a guided meditation, something that also explores the user's destructive tendencies. Sometimes when you are in need of something cleansing, you don't want to meditate: you want to break things! This also has the added benefit of maximizing VR. First, many of us just want to shatter 30 plates, but very few of us can afford that or want to deal with the aftermath of cleaning up our mess. Second, this is an experience that benefits from an immersive space like VR, because one can look around the room for objects to break, and it has an immediacy that is conducive to the need for feedback when we have strong emotions.

One of the key interactions I want to have is shattering objects. I shall begin with the quintessential plate! I've watched a few YouTube tutorials, and it seems that I must adjust the mesh in Blender. Today, I spent a couple of hours just figuring out the horrendous Blender interface, and I finally figured out how to fracture an object's mesh. I got a simple plate FBX model from TurboSquid and created a particle system for it. Using this particle system, I created fractured pieces for the plate: a total of 100 shards! I imported the model into Unity.

I’ve added the Throwable and Interactable SteamVR scripts to the parent object which I have made the unbroken plate. However, when the plate collides with something, it should shatter into the 100 shards.

    // Apply an outward explosion force to every shard under the broken plate.
    foreach (Rigidbody rigid in brokenObj.GetComponentsInChildren<Rigidbody>())
        rigid.AddExplosionForce(force, transform.position, radius);

The idea is to loop through each Rigidbody that is a child of the parent plate (thus each shard) and use the Unity function AddExplosionForce, as it seems to give the shattering the feel that I want.

I’ve been working on this for nearly an hour with very little progress, so I think I’ll come back to the script when I’m in a fresher state of mind. For now, I have implemented the teleport system. This time, I’ve learned from past mistakes and I think I’ve coordinated the floor and space such that the teleport is more natural. I’m not sure if I will keep the teleport. When I sort of mentally mapped out where each interaction would be, it felt a bit tight so having a teleport system would give extra flexibility. I just added it because I’m not sure how much space I’ll need as the interactions evolve and also not sure what exactly our space will be in terms of dimensions at the IM showcase.

For the environment, I’m debating a few options. One, a retro looking living room. Two, an office space with cubicles, etc. Three, a dojo that is meditative but also fight-conducive.

the dojo environment in progress

I debated changing my idea because I’m not sure if I can get the right rhythm needed for this experience. Perhaps destruction is not my forte. I kept daydreaming about making a VR experience that was underwater and having fish blow giant bubbles that the user could pop.

But, I shall keep going at this a bit longer.

For the shattering-objects script, I've tried many different approaches. The difficulty lies in the way my Blender model is created and its specific object hierarchy. I tried taking the unbroken plate so that when it collides with the floor or wall, the unbroken layer is removed and the shards remain. I want to add an explosion force, so I need to loop through all the shards, add a Rigidbody to each, and then use that Rigidbody with the Unity method AddExplosionForce. If the shards exist from the beginning as part of the object, I have to keep them affixed to the unbroken layer. I played around with putting a Fixed Joint on them, but everything keeps shaking; I think it has something to do with the box colliders. When I check Is Trigger, the shaking goes away at least.

using Fixed Joints to keep the shards attached.

I think it will be better to unpack the object and separate the shards from the unbroken layer. When the unbroken layer collides with the wall or floor, it will be destroyed and the shards will spawn.
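Putting that together, here is a condensed sketch of the approach (a sketch only; the class and field names like brokenPrefab are mine, and the force and radius values are placeholders):

    using UnityEngine;

    // On impact, replace the unbroken plate with the 100-shard version and
    // blast the shards apart from the contact point.
    public class ShatterOnCollision : MonoBehaviour
    {
        public GameObject brokenPrefab;   // the fractured, 100-shard plate
        public float force = 300f;
        public float radius = 2f;

        void OnCollisionEnter(Collision collision)
        {
            ContactPoint contact = collision.GetContact(0);
            GameObject brokenObj = Instantiate(brokenPrefab, contact.point, transform.rotation);
            foreach (Rigidbody rigid in brokenObj.GetComponentsInChildren<Rigidbody>())
                rigid.AddExplosionForce(force, contact.point, radius);
            Destroy(gameObject);   // remove the unbroken layer
        }
    }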

This approach is working so far, but I am having trouble with where the shards spawn. At first, I spawned them at the transform.position of the unbroken object, but they would spawn under the ground. Now I am spawning them at the initial contact point, but they float above the ground (I have frozen their y position because they otherwise initiate below the ground).

shards floating above ground! aaaaahhh

I’ve added a public offset variable and set it to -3 for now. But the shards just float aimlessly after removing the y constraint on each individual shard.

It’s kinda trippy though?

I’ve kept the offset and removed the y frozen constraints on the individual shards, but frozen the y on the parent object. This seems to be working for now.

Freezing y position on parent object
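I set this in the inspector, but for reference, the script equivalent would be a one-liner (assuming the spawned shard-pile root has its own Rigidbody):

    // Freeze only the parent's y position so the pile can't sink below the floor.
    GetComponent<Rigidbody>().constraints = RigidbodyConstraints.FreezePositionY;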

Some initial bugs: if you throw the plate like a frisbee, the shards spawn in an unnatural circular shape. If you don't hit the plate hard enough, it doesn't break, but I think that's honestly okay. If the plate is thrown in such a way that it collides twice, it spawns the shards twice, but that's also no big deal, as it just means more fragments and it seems to be rare. Only the first bug is something to work on.

circular shards

To address this, I've changed the transform positions of the individual shards so the result looks more like a pile. But when I throw it, many of the shards spawn under the floor, so only a few are actually visible on the ground. I will table this for now and address it later, as I want to create my other interactions.

I also want to add some sound. Because I am destroying the object, I cannot add an Audio Source to the plate and expect to get the component and play the clip. I played around with creating an empty game object, but I'm a bit too lazy to get everything to line up correctly. I then chanced upon the Unity method AudioSource.PlayClipAtPoint, which essentially instantiates an Audio Source and destroys it after it plays the clip. I used the contact point to place it. I am not sure how expensive it is to create and destroy an audio source every time.

    // Spawns a temporary AudioSource at the impact point and destroys it after the clip plays.
    AudioSource.PlayClipAtPoint(shatterSound, contact.point);

I got a friend to test the plate shattering, and they ended up throwing it at the ceiling, which I hadn't thought about. I added the ceiling's collider tag so the plate will also shatter there. The reason I need collider tags at all is that otherwise the script would trigger on any collision, like with the hand or the cabinet the plate starts on.
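The updated collision check now looks something like this (the tag names are whatever I set in the editor, and Shatter stands in for the spawn-and-explode logic sketched earlier):

    void OnCollisionEnter(Collision collision)
    {
        // Only break on surfaces meant to shatter the plate, not the hand
        // or the cabinet the plate starts on.
        if (collision.gameObject.CompareTag("Floor") ||
            collision.gameObject.CompareTag("Wall") ||
            collision.gameObject.CompareTag("Ceiling"))
        {
            Shatter(collision.GetContact(0));
        }
    }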

After seeing how much people enjoyed throwing objects around in the recycling project, I thought I'd create some bins for people to try to aim balls into. If the ball hits the box, it disappears and a gray line appears where the player threw from, so they can see if they can get it in from further and further away, like a game with themselves.

the three boxes

I found the boxes in a medieval containers pack. I added a bouncy physics material to the ball, but it was too bouncy and just bounced on forever, so I added some friction and reduced the bounciness. For now, a dynamic friction of 0.3 and a bounciness of 0.9 is working well. For the interaction, I added a script to the ball so that if it collides with a collider tagged "box", the ball disappears and a cube appears where the player was standing. Okay, it's technically a cube object, but it reads more as a white line. I wasn't sure which object of the Player prefab to reference for the transform.position, so I ended up using the body collider, which seems okay for now.
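A hypothetical sketch of that ball script (markerPrefab and playerBody are my own names for the white-line cube and the rig's body collider):

    using UnityEngine;

    // When the ball lands in a "box"-tagged bin, drop a marker line at the
    // player's position and remove the ball.
    public class BallScore : MonoBehaviour
    {
        public GameObject markerPrefab;   // the thin white cube
        public Transform playerBody;      // the Player prefab's body collider

        void OnCollisionEnter(Collision collision)
        {
            if (collision.gameObject.CompareTag("box"))
            {
                Instantiate(markerPrefab, playerBody.position, Quaternion.identity);
                Destroy(gameObject);
            }
        }
    }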

I’m also having trouble with the inside walls, but think this has to do with the normals, so that Unity doesn’t waste resources rendering the inside of buildings. I ended up just flipping the walls that weren’t showing up.

inside walls not appearing?

I added a hatchet so people can smash things! If you pick it up a certain way, it really does feel like you are holding a hammer. However, if you pick it up from another side, it feels very unnatural, of course. I suppose that, just like in real life, people will put it down and pick it up again from the side that feels more natural and functional for the task at hand. I wonder, though, if there's a way in Unity to allow an object to be picked up only by a certain part. Perhaps I can see if the prefab can be broken up into the handle and blade parts.

When I only have the Throwable script on the body, the handle and blade separate. I will ask people in class on Monday if they have any ideas.

I’m not sure what to do with the walls. I wanted to make them crack if you hit them with the hatchet, so tried playing around with the texture of the plaster to add cracks. But, I don’t think it will work out because the cracks need to appear from the point where it’s hit and I have no idea how I could do that. I’m debating whether they should glow or shake or some sort of feedback, so you hit it three to seven times before it falls.

Also debating what background sound to put in the room. I started watching videos of real “rage rooms” to get ideas and they seem terribly wasteful as people smash plates, etc.

My first thought was to use a song like the battle music in superhero movies: something energetic and pumped up, but not necessarily angry. Then I started thinking about Kanye West's song "Power," but it might be too intense for the environment I have. Then I considered a couple of Arctic Monkeys instrumentals.

For now, I shall use the instrumental of "Laughter Lines" by Bastille. The choice of music has to be perfect! I asked a friend to try out the environment with this music, and he liked it.

After the feedback from in-class playtesting, I realize my project lacks a sense of cohesion. Not that cohesion is a requirement for a sense of story, but I think it helps create a meaningful experience. It lacks cohesion in the sense of the power involved in the player's various movements and the feeling I want to evoke in the player. For instance, I have the player throwing a ball that is bouncy and very hard to get into the box. The player would likely be discouraged from continuing to throw the ball, and as we learned in Games and Play last summer, it is very important to keep your player motivated through small successes indicated by feedback. Additionally, the feel of the ball's bounciness is light and playful, which is too much of a juxtaposition with the smashing of the plates. I'm pretty satisfied with the plate smashing, though I need to figure out how to keep the number of newly instantiated objects (the plate fragments) from affecting performance. Also, I got negative feedback on the music, so I shall switch to Arctic Monkeys again! I also got feedback that the outside was quite creepy.

After the playtesting session, I realize I need to ask myself: how do I want the player to feel during the experience? I think I was so obsessed with the idea of relieving stress that I didn’t really get into the specifics. I want the user to feel powerful, like they can achieve anything. I want there to be a sense of relief of course in terms of the satisfaction derived from the interactions. I also want the user to feel a sense of resolution at the end.

Getting at the core of this, I think I will remove the ball throwing. I'm not really sure what's going on with it myself…there's just too little incentive to enter that interaction. It is not an easy decision, because I really like the concept of a line appearing at the player's position when they successfully get the ball into the box.

I also will make the walls get destroyed each time the hatchet collides with them. Before, I had a count of three, so the player had to hit a wall three times before it would break, but that was rather annoying.

To replace the ball interaction, I want the player to break printers with the ax as well. I created three new meshes for the printer in Blender (each mesh a different level of destruction) so that it could shatter. Each time the ax collides with the printer, it instantiates the mesh for the next level of destruction.

Trying this out, the instantiation isn't clean and there's not much satisfaction, so I think I will just make the printers explode. Essentially, I am writing a script that instantiates a particle prefab. I was having trouble making the explosion occur at the exact site of collision, so I used a ContactPoint object. I've also reduced the radius of the particle system for a more contained explosion and added an explosion sound.
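A rough sketch of the printer script (the prefab, clip, and tag names are my own assumptions):

    using UnityEngine;

    // On being hit by the hatchet, spawn the explosion particles and sound
    // at the exact contact point, then remove the printer.
    public class PrinterExplode : MonoBehaviour
    {
        public GameObject explosionPrefab;   // contained particle system
        public AudioClip explosionSound;

        void OnCollisionEnter(Collision collision)
        {
            if (collision.gameObject.CompareTag("hatchet"))
            {
                ContactPoint contact = collision.GetContact(0);
                Instantiate(explosionPrefab, contact.point, Quaternion.identity);
                AudioSource.PlayClipAtPoint(explosionSound, contact.point);
                Destroy(gameObject);
            }
        }
    }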

I can’t seem to get the walls to work correctly…some just won’t be destroyed. This seems to correlate with whether the body has also collided with the wall at some point. Thus, I’ve changed the function to be OnCollisionStay rather than OnCollisionEnter, so that it is checking for the right collider tag constantly. I’ve also checked and adjusted each box collider individually. This doesn’t seem to work either. I have decided to also add all the parts of the body, that is the Player prefab, as a tag ‘Player’ that is one of the tags checked during collision. This isn’t working either.

I’ve added a script to respawn the hatchet on the table if it falls below the y-position of the floor. Also, added fade in to the camera and the instruction UI.

During user testing, the sunset I had was deemed to be sort of spooky. I want to create an open sky and a forest outside.

It doesn’t exactly match the feel of the room. Though I am trying to create a clear difference, this is just too different. I also want something for the player to interact with outside the room but not go out of the room.

I began with bubbles, thinking the user could pop them with a raycast. The bubbles felt too light, and the raycast was awkward alongside the teleport and picking up objects. Then I thought I should confine the raycast to a container object, like a laser gun, so that it would be active only when that object was picked up. I found a Unity tutorial called 'Let's Try Shooter' that I used to start building the laser gun system. But the laser gun had too light a feeling.

I remember how powerful the VR maze game made me feel. The bow and arrow was a very satisfying interaction, so I think I will use a bow and arrow instead of the laser gun. I am just using the longbow from the SteamVR system.

Okay, but what will people shoot with the bow and arrow? I liked the idea of floating islands in the sky, but I'm not sure what interaction to create with them, and I cannot find any suitable assets. I thought of creating a cloud system, but it doesn't have the desired effect and looks kind of messy. So I started thinking of things that evoke nostalgia, which is usually a relieving feeling.

I started getting fixated on this vision of a carnival with a colorful marquee and balloons. I decided to go off on this tangent and made a sea of colorful balloons in the open sky around the room. The bow and arrow would be used to pop the balloons, though I wonder whether balloons are weighty enough to make the user feel powerful.

I really like bubbles and find them so relieving, so I decided to still incorporate them into the experience: they'd burst out of each balloon when it popped. I really like the popping sound of the balloons.

I created alternate shaders to give the balloons different colors and a certain reflection when the light hits.

I started creating a marquee from scratch because I couldn't find any suitable assets. I wanted to make ribbons that the player could wrap around the columns like a maypole, but given the constraints of my Blender knowledge and the time I have left, I am just going to build the marquee from rectangles.

It actually doesn’t look that bad. It’s very colorful but I want the user to look up at it. I think I’ll create a particle system of lights dancing down.

I can’t figure out how to get the music to change!!!! I picked this specific piano cover of Kygo’s Firestone song because it has a tingle of nostalgia and is generally positive sounding without being overly upbeat…a sort of perfect resolution song. I ended up creating the script to play the audio source and stop it. I used the PlayClipAtPoint to create a second audio source to play the new clip for the ending which is triggered by all the walls being destroyed.

Visualizing Data in VR

During Bret Victor’s talk, I loved learning about William Playfair and how he invented the bar chart and other graphical methods to represent data. Related to these methods are “explorable explanations,” abstract representations that show how a system works or a way for authors to see what they are authoring without the black box of code.

Data visualization is a powerful form of representation that is well suited to VR. Though some visualizations have been developed in VR, they usually just rely on the game engine to navigate between charts, or they add some irrelevant motion, like the bars rising in a bar graph when it first loads. I think with VR we can do more to incorporate the different modes of understanding that Bret Victor mentioned. For instance, we can build on our spatial understanding to grasp quantities, time, associations between nodes of information, or even how the charts are organized (like a library of books). We can build on our aural understanding by having audio that explains the data and walks the user through it at a level specific to the user's experience.

VR can make a data visualization an interface to information, making the data accessible and easy to understand through abstraction. However, there is also potential to unpack the layers of abstraction and show how the visualization was made, or even give the context behind the data. For instance, if a chart shows the amount of snowfall, could the user be immersed in the environment of that snowfall alongside the visualization of its levels? Data visualizations are a person's stories about the data; they are already created with a specific objective for their audience in mind. The trouble with these visualizations is that they tend to dehumanize the context behind the data, so VR really has the ability to use its potential for immersion to help the audience better understand the story. However, it is important that VR not exploit this potential by creating an immersive experience that falsifies the data or skews its perception. I also think using VR to visualize data relates to the dynamic models Bret Victor discusses at the end of his talk. Imagine data being updated in real time and seeing the representation change: a bar increasing, a point being added to a line graph, etc.

Development Blog: Project 2 - Simran (and Yufei, Ju hee, and Lauren)

For our project, we want to create an environment that relates to sustainability on campus. We wanted to challenge what happens in the "real world," where there are no consequences if someone passes by trash without picking it up. In our alternate reality, we hope to provide negative feedback so that the user/recipient is transformed, translating into different actions/reactions in the real world. We hope to use a Gazelle article written about recycling on campus to inform our interaction design.

Some initial questions: how campus focused should it be? Should we create an environment that is a realistic representation of campus? Do we make the campus environment more abstract? When designing virtual reality experiences, how do we provide feedback to the user when they have reached the edges of our world? How should the trash respond when someone walks past? Do they rise and float in the user’s field of view? Is there some sort of angry sound that increases with time? What feedback is provided if the user puts the trash in the wrong compartment (plastic vs paper vs general like the campus receptacles)?

Our group’s storyboard sketched by the talented Lauren!

From this initial concept, we decided to just start building the piece in Unity to see what we are capable of accomplishing in a relatively short amount of time. We split up the work: Lauren and I will do the interactions and Ju hee and Yufei will build the environment.

After the first weekend, we had an environment built with a skybox and some objects. As a team, we've decided to change direction in terms of the environment…we want to build an abstract version of campus. This will delay things, as we only have a week left and the environment will take at least three days to build, but I think it'll be worth it in the long run. I'd rather have less complex interactions and a more meaningful environment at the end of the day. Since Lauren has a very strong vision of what she wants the environment to look like, we are dividing the tasks differently: she and Ju hee will do the environment, and Yufei and I will implement the interactions.

Here is the lovely environment that Lauren and Ju hee have built. It looks very much like campus! Yufei and I have just integrated the Vive and SteamVR system into the environment and are looking around the space. I wish we had integrated it earlier, as there are a few scale issues and the space is very, very large, things that can only be seen through the headset. We shall have to implement a teleport system and rescale objects.

Yufei is working on the teleport system. SteamVR 2.0 makes it quite simple: we just needed to add the 'Teleporting' prefab and the teleport points. One thing we are struggling with is the level of the teleport system. It needs to be within the player's arm reach, and we've tried various combinations of levels for the ground, player, and teleport points, but when we make them the same level, the player teleports lower for some reason. For now, we shall place the teleport points slightly above.

Yufei turned the system into a teleport area rather than points. The arc distance of the raycast seems to be something we need to play around with to match a comfortable level for the player's arms. For now we have set it to 10, which makes it easy to teleport far but difficult to teleport to a close location.

Unfortunately, we have spent a lot of time setting up our base stations. Additionally, whenever we look at the environment through the headset and move our heads, the buildings flicker in and out and sometimes disappear completely. A forum search revealed that we needed to adjust the camera's clipping planes, which define the visible region of the scene. We adjusted the near and far parameters to 2 and 2000 respectively, and that seems to work just fine! Additionally, the textures on the grass and floor looked very pixelated and stretched out, so I increased the tiling on their shaders.

tiling of campus ground
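Going back to the flicker fix: we set the clipping planes in the camera's inspector, but for reference, the script equivalent would be:

    // Anything nearer than 2 or farther than 2000 units is not rendered.
    Camera cam = Camera.main;
    cam.nearClipPlane = 2f;
    cam.farClipPlane = 2000f;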

Time to implement interactions on the trash! I've added the Throwable and Interactable scripts to all the objects. For now, there are cereal boxes, toilet paper rolls, wine bottles, food cans, and water bottles. Yufei and I decided to delete the toilet paper rolls (why would one throw those away?) and the wine bottles, since they have liquid in them and one cannot recycle glass on campus except in a few places. We also deleted the cans, as one can only recycle metal in the dorms and we wanted the experience to feel like waste disposal while walking around campus. We did add a chip bag, as we wanted something to go into the waste bin rather than one of the recycling bins.

Speaking of the bins, I've added labels to them. At first I used the UI Text component, but it was visible through all other objects and quite blurry. To rectify the blurriness, I increased the font size substantially, well beyond the character size, and reduced the scale of the text object down to the size I wanted. Another forum search revealed that the see-through problem was inherent to UI Text, so I switched to TextMeshPro text, and the problems seem to be fixed.

fixing blurry text

I am testing whether the objects can be picked up, but they are quite far from the player since they are on the ground. Yufei and I keep playing around with the ground, teleport, and player levels to find something that works, but nothing seems to. We've tried putting the ground and player at 0, as recommended online, but when we add the teleport in, the teleport level seems to change a lot. I shall make the objects float for now, as we are running out of time.

I am struggling to find the right attachment settings on the scripts. Additionally, our binding UI does not work, so we seem stuck with the default binding of picking up an object with the inner button on the controller. Objects still don't seem to be picked up.

I don’t know what I’ve done differently, but I can pick up one of the objects now. However, it doesn’t seem to stay in my hand so I’ve only nudged it. It acts like a projectile so it takes whatever velocity my hand gives it in the direction of the nudge. Not good!

Yufei has been working on the objects and says we need them to have gravity in order to throw them. She has also found the sound files. The question now is whether we should just place the trash on tables. I'll play around with it and see how it looks…after all, it can just be like the D2 tables, I guess. It doesn't look that bad honestly, but Max came to save the day by helping us find the right combination of ground, teleport, and player levels. It also turns out our room setup was incorrect, which is why it was difficult to reach the floor. Either way, the system works a lot better now, and I feel less nauseous when testing since it feels more natural. I still need to find an arc distance that works; 7 seems best for now. I have also kept the tables, as they are reminiscent of D2, but moved the objects so that they are scattered on the floor.

For the bins’ interaction, I planned to add tags to the objects and the bin. If the tags matched, if they were both ‘plastic,’ then it would be correct. I added the test for this condition inside the loop of the target hit effect script on all the objects, not realizing that the loop just checks for the target collider, not the possibility of the non-target collider. I modified the script to add two public wrong colliders for the other two bins. If it hits the target, I want the correct sound to play and the object to be destroyed upon collision with the bin. If it hits the wrong one, the incorrect sound should play and a message should pop up saying: “This object should be placed in the bin marked: “ + the tag of the object. Thus, for the chip bag, water bottle, and cereal box, their target collider is the bin they should be placed in and the wrong colliders are the two remaining bins.

adding ‘wrong colliders’
Settings for target hit effect script
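Here is that simplified sketch using tags alone (our actual version modified the target hit effect script, and the field names here are mine):

    using UnityEngine;

    // Attached to each piece of trash: compare the trash's own tag against
    // the bin it enters and react accordingly.
    public class TrashBinCheck : MonoBehaviour
    {
        public AudioSource correctSound;
        public AudioSource incorrectSound;

        void OnTriggerEnter(Collider other)
        {
            bool isBin = other.CompareTag("plastic") || other.CompareTag("paper") ||
                         other.CompareTag("general");
            if (!isBin) return;

            if (other.CompareTag(gameObject.tag))
            {
                correctSound.Play();
                Destroy(gameObject);   // the custom destroy mentioned below
            }
            else
            {
                incorrectSound.Play();
                Debug.Log("This object should be placed in the bin marked: " + gameObject.tag);
            }
        }
    }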

I’ve added two audio sources for the incorrect and correct sounds. However, I keep getting an error in the debug log about there being more than 2 audio listeners. The culprit is strangely the player prefab? It has one attached as expected to the VR camera but it has another one which is strange because it’s a prefab and a Unity scene can only have one audio listener. I delete the one not on the camera and hope for the best. Now my sound works!

Now the sounds work, but the destroy-on-collision and the message do not. There is a boolean for the destroy already on the script, but it doesn't seem to work, perhaps because I modified the script. I just wrote my own destroy method, and the problem seems to be resolved. I also need to adjust the level of the bins to be more natural.

I’ve also fixed the cat animation so that it goes from idle A to idle B to walk.

The environment still feels quite big. After you pick up an object, it's such a chore to walk all the way to the bin on the other side, so I'm going to play around with the position of everything in the environment to make it easier to move around. Additionally, I want to make it obvious that one should pick up the trash by featuring the bins prominently in the scene, as I don't want to ruin the feel of the piece with instructions.

It’s now 1 am Sunday night, so I’m off to bed. But hopefully someone in my group or I can work on this Monday morning to implement the nice to haves:

  • Having a message pop up when you place something in the wrong bin, saying which bin to put it in, so it's more instructional in nature
  • Having the trash emit a glow or make a sound if you are within a certain distance of it
  • Having some sort of reaction if you pass this distance without picking it up
  • Having more trash and more complex trash

I am working on the message now, but I'm having issues with positioning the UI and with concatenating the tag onto the message. If I can't get it working before class, I shall simply delete it.

An Interaction I Like: Black Mirror Bandersnatch (but more of a ramble in all honesty)

This was the first interactive film that I know of designed to be streamed alone or in a small group of people; such an experience could not have the same effect watched in a theater among a big crowd, as you wouldn't feel complicit in your choices in the presence of other people. The film is reminiscent of a hypertext novel, in which your decisions fork various paths. There are evidently one trillion different ways your individual film experience can go.


There isn’t too much time in between making a decision that it becomes a typical movie, but it isn’t short enough that there is no room for storytelling through the medium of film. In other ways, the speed of interaction feels just right. There is also a progression to the interactions…with each subsequent decision being more high-stakes. The player is eased into the decisions, which coupled with having enough time for the arc of character development, creates more emotional investment into the outcome they receive. If they reach a dead end, the viewer can go back in time as the whole story is about multiple threads of time and whether one is more real than the other. However, the story becomes altered just as they do. Being able to go back to the last point before it all went wrong furthers emotional investment in the interaction as the viewer becomes struck with the desire to see many, if not all, possible endings–there are evidently five possible main endings for the film, so that is a good number that it is possible to see all five in about two hours, the average length of a feature film. Each decision that the viewer makes has only two possible immediate choices, making it relatively simple for the viewer comprehend what is required for them to interact. If the film was convoluted with the ability to make a decision from even three or four or eight choices, the viewer would probably get exasperated and stop watching or make a random decision reducing the emotional investment in the decisions made.


What I loved most was that the film was really about the appearance of interaction. Though the viewer is empowered to make decisions over the course of the film, they come to realize that they only have the appearance of free will, a realization whose development parallels the protagonist's. The protagonist similarly realizes that his decisions are being controlled by none other than the viewer. Perhaps "appearance of interaction" is phrasing it wrong, because it is certainly interaction, given the media's various reactions to the viewer's choices. Rather, it is that your interaction appears to have consequences, but you soon realize the futility of making decisions. The film also raises the question of whether it is only through insanity that one can realize their true creative potential. From the first decision of what cereal to eat to one of the last decisions of whether or not to kill your father, Bandersnatch is an engaging interaction. It received much criticism from Black Mirror fans and others who declared that the endings weren't personalized enough. That is probably well warranted if you look at it critically as a film, but as a medium, it raised so many questions about how an interaction can be designed to give you the appearance of free will…with other forms of media, the artist usually has an intended interaction for the viewer; there is a specificity to the interaction. It also makes the viewer wonder whether this will change the way we watch our screens, and the dangers that may come with that in terms of what data is collected from us…when we interact with something, is our interaction a form of input? What could be gleaned from it? And by whom?

VR Title Review: InMind


My first Google Cardboard experience was InMind, a VR experience designed to give the viewer an inside look at the brain. I selected this title because I was really excited to have a VR experience in the context of the human body, to gain greater insight into what our brains look like. What I found was something else entirely.


The experience began with an introduction in which the viewer is referred to as "Human." The only other character is a robot narrator who speaks to the viewer in a very patronizing manner. The robot frames the narrative as "we are going to look into the brain of a patient who has depression," and the viewer is launched into what is evidently a game in which they must focus on the red neurons of the brain and turn them back to "normal" to "help" the patient. The ridiculous oversimplification of mental health aside, I do not feel that InMind achieves its goal of giving the viewer an immersive experience of the human brain.


First, the interaction is too slow. Feedback is given in the form of a circle becoming fully shaded as you concentrate on it. However, it takes too long for the circle to fill and thus for the red neuron to change. It brought to mind what Chris Crawford says about interaction in his book on interactive storytelling: that it must have speed. The interaction was also slow in the sense that once you changed a few neurons, nothing seemed to happen. I grew bored. There seemed to be no progression in the narrative, and when there finally was, it was merely a sentence or two from the robot giving a feeble "keep going" message. Because there was such a focus on the red neurons, a pointless focus, I don't think the viewer really observes the whole brain and all its synapses of activity. Sure, the environment was pretty, but it didn't feel immersive. Perhaps that is because the game did not take full advantage of designing for VR: the roller coaster through the brain only moved forward, never giving the viewer the chance to explore. Furthermore, I felt detached as a player. The game might have felt more immersive with more effort put into personalization; for instance, instead of being called "Human," you could be called by your real name or a name you created.


Though the whole experience lasted only four minutes, it felt much longer. In this VR mind, I was bored out of my mind. The ending was just as dull as the progression: "congrats, Human, on not dying. Now download these other apps," or something along those lines. I wouldn't say this was a waste of time, though, because it made me realize how important it is to have progression through a VR experience, a clear narrative if you will, and to pay attention to how much feedback is given to the viewer along that narrative and how quickly it arrives.

VR app: InMind


The Dream Forest: Documentation

It is very difficult for me to focus, especially when I’m trying to sleep. Listening to music certainly helps block out darting thoughts, but I wanted to create a visual space that I could focus on before going to bed. Thus, I wanted this space to be peaceful, minimalist, and beautiful.

Inspired by the place I grew up in, I wanted to create some sort of dream forest that felt very natural even if it had some mystical elements. To me, a forest environment conveys solitude and peace, and it has no extraneous elements that could distract, helping the viewer become more immersed. One of my favorite things is looking up at trees and seeing the criss-crossed layers of branches against the sky, so I was particularly excited to create a forest in VR where the viewer could look across the forest but also up at the branches. I began by creating the forest from a mixture of free tree assets. I ended up removing the trees' leaves: though they contributed to a feeling of peace, they went against the meditative aesthetic I was trying to convey, perhaps because they limited how far the viewer could see in every direction, which I felt was crucial to creating a sense of reflection. The most challenging part of creating the forest was determining the optimal density of the trees. Too few, and the environment felt unnatural and bare; too many, and the viewer could not see into the distance. Something I didn't take into account was how much space the viewer needed: I originally placed the player camera in the center of the forest without clearing anything around it, which made the environment feel chaotic and cluttered, the opposite of what I aimed for. I ended up creating a clearing in the trees around the player camera so that the environment felt more personalized to the viewer and gave them more room to breathe.

Once the basic form was created, I could focus on the little details that would create the identity of dream-like peace. I began by changing the skybox to a night sky; I felt a darker environment would be more dream-like and conducive to using the environment before sleeping. However, the dark skies with the barren trees gave the ambience of something dark and sinister rather than calming and beautiful. I knew I needed to add elements that would make it dream-like, conveying the sense of being in an alternate reality rather than just any forest at night. I added blue fog, which added a tinge of magic but also aerial perspective for the trees in the distance. I added a moonray, which gave the forest a white glow that made it feel more peaceful. I played around with several elements (a pond, mushrooms, mist, swaying flowers, flying birds) but ended up choosing floating orbs of light and a gentle wave. I wanted something with soft, regular motion, like breathing or rocking a baby to sleep. I chose a wave that flowed through the entire forest because of the sense of peace it gave me and the supernatural ambience it added, tuning it so that it would barely be there and then gently fade in. For the orbs of light, I created a particle system and adjusted its properties so that the orbs would be a rose-gold color to balance the cool tones of the forest. Additionally, I changed the size of the particles and the radius of the system so that the orbs would float up from the whole forest, visible if the viewer looked upwards. I wanted the orbs to balance the darkness of the forest and to be something calming that also invited a sense of awe. Finally, I added soft music with the sounds of waves to reiterate the peacefulness of the environment.

I am quite happy with the results, though a bit disappointed that I couldn't get it to work with the Google Cardboard. For some reason, every time I built the project with the Google VR Player prefab, my computer would crash and Unity would quit automatically. I did get it to build successfully once, but in the build version I couldn't seem to move, which was surprising because it worked perfectly fine when running in Unity. I suppose this is okay, as I have plenty of time to figure out how to make it work. I actually opened the environment once last night when I was feeling stressed, and it did calm me down a bit, though that could merely be a placebo effect resulting from my bias towards my own environment. It would be nice to playtest it, see how others respond, and adjust my design from there. One thing I want to play around with is a script that changes the skybox depending on the viewer's time of day. This is something I want to experiment with rather than definitively do, because I'm not sure how the barren trees would look during the day. Overall, I'm happy that I got better at ambient lighting and creating particle systems in Unity, which will be very helpful for future projects. One thing I learned through this project is how much little details contribute to the identity of an environment. I originally intended to create a dark forest like the Forbidden Forest in Harry Potter and ended up with a dream-like identity through just a few simple elements.
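For the skybox idea, a minimal sketch of what I have in mind (both materials would be assigned in the inspector, and the hour cutoffs are arbitrary):

    using System;
    using UnityEngine;

    // Pick the night skybox during evening/early-morning hours, day otherwise.
    public class SkyboxByTime : MonoBehaviour
    {
        public Material daySkybox;
        public Material nightSkybox;

        void Start()
        {
            int hour = DateTime.Now.Hour;
            RenderSettings.skybox = (hour >= 19 || hour < 6) ? nightSkybox : daySkybox;
        }
    }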

Link to build: https://drive.google.com/open?id=1_yssadg1_JHMBkA9gciECKkH8AFv1Ikc

Link to project folder: https://drive.google.com/open?id=1aKHo-y6Jyy-8rRRMqRCYTvH2BMCCXRO5

Link to class presentation slides: https://docs.google.com/presentation/d/1iZphCYgfIraWx_qzIPrnPdoGto7hJEd31g_X83OTU0w/edit?usp=sharing

Development Blog: Project 1

For the first project, I bounced around two ideas. The first was a beach environment abstracted such that when you moved around in a circle from your vantage point you would see a change in the beach as a result of human activity, a gradual transformation. The second idea was a mystical forest, inspired by the Forbidden Forest. Perhaps because I watched Harry Potter over the weekend, I leaned towards the latter.

The Forbidden Forest in Harry Potter 2

Though I do have a bit of experience with Unity, I don’t feel very comfortable with lighting, fog, or aerial perspective. Since these are critical to creating mood and a sense of place, I hoped to specifically work on these aspects while making this project.

So far I’ve found some assets I really like from Fantasy Forest and I am experimenting with the color of the fog as I think that could make it more of an alternate reality by giving it a tinge of magic. I struggled a lot with making the trees have the scale that I wanted, but I realized that that had more to do with the perspective of the camera than the size of the trees themselves.

I’ve been messing around with the density of trees I put into the scene and the fog. The difficult thing about the fog is that it requires a precise density…too small and the fog will be barely there but even if you increase it just a bit, your scene will become consumed by fog.


I think the scene could do with a tad more fog, but this is about the tone I hope to convey. The blueish tint of the fog gives off an aura of mystery. I really hate the ground currently; nothing I do looks natural! I've tried adding rocks and different textures to the ground, but it doesn't help the trees fit into the ambience. I think I shall experiment with the terrain to see if adding more elevation levels makes the forest look better.

Actually, adding a different skybox helped a bit, though it definitely gave the scene a dark forest vibe.

Here I just increased the fog density by .01 and the difference is staggering.
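Fog lives in the Lighting settings window, but for reference, the script equivalent is just a few RenderSettings properties (the exact values here are guesses):

    // Exponential fog: tiny density changes have a big visual effect.
    RenderSettings.fog = true;
    RenderSettings.fogColor = new Color(0.55f, 0.6f, 0.8f);   // blueish tint
    RenderSettings.fogDensity = 0.03f;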

Experimenting with a moonray directional light:

I had envisioned a pond with mushrooms and magical flora surrounding it. I wanted the mushrooms to glow, so I tried adding an emission to a new material, but that made the mushrooms neon blue and far too bright. Instead, I adjusted the prefab by giving the shader a blueish color for a slight glow.

I think I will try to make a meditation forest now. I have a lot of trouble sleeping at night, so usually I end up listening to music to block out all the darting thoughts. But, my imagination usually tends to run wild all the same, so I wonder if having something visual to focus on would help me concentrate or sleep.

I think I shall keep my skybox and fog as they are, since I still hope to convey a nighttime ambience reminiscent of sleep. But I now need elements that signal peace and tranquility to balance the darkness and barrenness of the trees. I did try putting leaves on many of the trees, but it didn't support the aesthetic I was aiming for.

I experimented with a few things: waves, swaying flowers, mist, flying birds, and flying orbs of light. I wanted to keep my environment as uncluttered as possible, so I only chose my two favorites: the orbs and the waves.

With the orbs, I had to play around with the particle system to get the aesthetic I desired. Specifically, I played around with the color, the spread of the system, the size, speed, and direction of the individual particles, as well as the emission (glow).

Blog Post 1: An Unreliable Narrator


Mohammed kept trying to run me over with his tricycle. He was only four, but he was already the tyrant of his little kingdom: the couches in the main room of the house and the small television that only seemed to play Spanish soap operas dubbed in French. When anyone tried to invade his kingdom, he blew fart sounds into his or her face.


He defended his territory in such a manner when his older sister entered the room, but immediately stopped when his father followed in after her.  She laid a cloth on the floor and beckoned me to squeeze soap onto my hands and rinse them in the bucket. We all sat upon the cloth surrounding the silver platter on the floor and waited for the father to begin eating.  They used their hands and I used a spoon and they kept passing the best bits of the fish or the crispiest rice to my side of the platter. Every time I put down my spoon, the family would ask why I was not eating, was the food good, and other entreaties that made me pick up the spoon once more until my bursting stomach begged me to stop and say “sourna,” a Wolof word indicating the end of a meal.  They insisted I sit on the best couch, deemed so because it was closest to the fan, and so I rested until Mohammed woke me up with his farting sounds.


The next day, I bid farewell and let Mohammed reign in peace.  I took a quatorze-places back to Dakar. Literally translated, quatorze-places means ‘14 places’ in French, but the term was a bit misleading because though there were 14 people squished into the van, there were not 14 places. As I sat in the back row of the vehicle with four other people and a bird, I thought about Mohammed and his family and the town where they lived on an island made of clamshells. I remembered drifting in and out of the streets near their home and sitting on a bench near the sea. The water scattered on and off the shore and when I looked up, I could see the cemetery where Muslims and Christians are buried together, outlined by bright blue lights. In a nearby house, a man played reggae on the guitar.

the island made of clamshells.

Three years later, I can still remember this music, the town, and Mohammed. Though this was a space I intersected with for less than 24 hours, I remember Joal-Fadiouth as a vibrant place for its serenity and tolerance. Because I spent such a short time there and had no purpose other than to observe, it feels more like a dream than reality, one my memory has surely over-romanticized, prompting the question: does the transience of an experience shape our perception of it? In our reading, Immersion, Janet Murray describes immersion as the "movement out of our familiar world, the feeling of alertness that comes from being in this new place" (Murray 1). This sense of immersion is what gave this space meaning, a specific emotional context, and made it a place I will remember.