Development Blog Project 2 (Interactions)

Initial Post: https://alternaterealities.nyuadim.com/2019/03/04/project-2-storyboard-claire-junior-atoka/

March 5, 2019

For our project, we plan on implementing two interactions that make the user fully delve into our virtual reality world. The first is the ability to grab the glasses. The second is to make the camera view blurry. We plan on intertwining both experiences by changing the camera view as the user grabs the glasses. Given the explorative and exciting nature of the project, I volunteered to work on the interactions, as I believe this will be an opportunity to learn more about C# scripting for Unity, while my partners are more interested in the design of the scene.

March 13, 2019

Now that the scene is completed, it is my job to implement the interactions necessary for our project. As I researched and explored Unity further, I concluded that there are two approaches to achieving the interactions. The first is to use the scripts that come with SteamVR’s “Interactions” scene and build upon them. The second is to write the scripts from scratch by following tutorials online. I decided that the former is more feasible than the latter given the timeframe imposed on us. Therefore, I deleted the Camera object that comes with Unity and pasted the Player prefab that comes with SteamVR into our hierarchy, since SteamVR’s scripts are fully compatible with that prefab. I then took the following scripts from SteamVR and attached them to the glasses:

  • Velocity Estimator
  • Interactable
  • Interactable Hover Events
  • Throwable
Scripts borrowed from SteamVR

These scripts gave the glasses a lot of capabilities. First, they light up whenever they are touched by the controllers. Once grabbed, the glasses can be thrown; they are affected by gravity, which means they fall accordingly and interact with the floor once they land.
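
To give a sense of how these scripts are used, here is a minimal sketch of a companion script for the glasses. It relies on the message-based callbacks (OnHandHoverBegin, OnAttachedToHand, OnDetachedFromHand) that SteamVR’s Interaction System broadcasts to any object carrying an Interactable component; the class name and log messages are just illustrative.

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem;

// Hypothetical companion script for the glasses object. It assumes an
// Interactable component sits on the same GameObject, since that is what
// broadcasts these messages in SteamVR's Interaction System.
public class GlassesInteraction : MonoBehaviour
{
    // Called by the Interaction System when a controller starts hovering.
    private void OnHandHoverBegin(Hand hand)
    {
        Debug.Log("Controller is touching the glasses");
    }

    // Called when the glasses are grabbed (attached to a hand).
    private void OnAttachedToHand(Hand hand)
    {
        Debug.Log("Glasses grabbed: blur the camera here");
    }

    // Called when the glasses are released or thrown.
    private void OnDetachedFromHand(Hand hand)
    {
        Debug.Log("Glasses released");
    }
}
```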

March 15, 2019

After talking with Professor Sarah Fay Krom, I am now able to use the post-processing stack that comes with Unity. This will allow me to change the camera view and make it blurry. The post-processing stack provides a script that can then be attached to the camera. This script can be modified through an easy-to-use panel in the Inspector, which lets you change settings such as depth of field, blur, motion blur, and color grading.

Post Processing Stack recommendations by Professor Krom

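As a rough sketch of how the blur might be toggled from code, the following assumes the Post-Processing Stack v2 API (PostProcessVolume and DepthOfField); the component name and wiring are hypothetical, since the settings can also be adjusted entirely through the panel.

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Hypothetical blur toggle, assuming Post-Processing Stack v2.
public class BlurToggle : MonoBehaviour
{
    public PostProcessVolume volume;   // volume on or near the camera
    private DepthOfField depthOfField;

    private void Start()
    {
        // Fetch the depth-of-field settings from the volume's profile.
        volume.profile.TryGetSettings(out depthOfField);
        depthOfField.active = false;   // start with a sharp view
    }

    // Called when the glasses are grabbed or released to blur/unblur the view.
    public void SetBlurred(bool blurred)
    {
        depthOfField.active = blurred;
    }
}
```

SetBlurred(true) could then be called from the glasses’ OnAttachedToHand handler sketched earlier, tying the two interactions together.
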
Representation in VR

One of the most effective uses of representation in VR is enabling interactions that are not possible in real life. Learning a language can be difficult. We find it hard to practice, especially because we usually don’t have a person around us who speaks the language we are trying to learn. However, one of the most effective ways to learn a language is to talk to other people.

There is a VR application that lets you learn languages through a VR experience. It gives people the interaction we lack when learning new languages from books. Because many people find learning a new language difficult, I believe that representation in VR in this area would be very helpful.

Project #2 Documentation

Project Description

For project #2, our group decided to build an experience that tackles the problem of sustainability on campus. We wanted to base our scene on campus, with trash on the ground. In the real world, if someone passes by trash and ignores it, there are no consequences. Besides, people all have the mindset that someone else will take care of it. We wanted to raise awareness within the NYUAD community by creating an alternate reality where, if people walk by a piece of trash without picking it up, they receive negative feedback indicating that they are not acting properly.

Besides, because of the diversity of the community, there isn’t a shared standard for recycling that everyone agrees upon. Having always been rather ignorant about the environment, I get genuinely confused when I throw away an empty Leban bottle: should I put it in general waste or plastics? The bottle is definitely recyclable, but only after I clean it. Recycling can be extremely complicated: I still remember how shocked I was when the RA told us that we should recycle the lid of a Starbucks cup but throw the paper cup into general waste. By creating an educational environment that mimics what actually happens on campus, we hope to teach people how to recycle in an entertaining way. Through repeated interaction within our scene, users may come to perceive recycling as less burdensome as they grow more familiar with it.

Process and Implementation

The tasks were divided up: Ju Hee and Lauren were in charge of the environment, while Simran and I were in charge of the interaction. After the environment was created, our scene looked like this:



When Simran and I started to work on the interaction with trash in our environment, we found a lot of problems with it. First, because we failed to set up our VR station when we first started the project, we didn’t have a sense of the size of our VR space and how it is reflected in Unity. Had I figured out that we needed to set up the VR station before Lauren and Ju Hee started building the environment, we could have saved a lot of time and energy rescaling the space. The environment was too large, so users’ movements were insignificant: users couldn’t really tell that they were moving inside the environment. So we decided to add teleporting. We divided our tasks: I would be mainly in charge of teleporting, and Simran would focus on the interactions, but we helped each other out throughout the process.

I went through several tutorials to understand how teleporting in SteamVR works in Unity. Here are the links to the tutorials: https://unity3d.college/2016/04/29/getting-started-steamvr/

https://vincentkok.net/2018/03/20/unity-steamvr-basics-setting-up/

At first, I decided to place teleport points next to each piece of trash, so that users could easily access the trash by aiming at the right teleport point. Then I realized that since we have such a huge space, users would never be able to reach the areas without trash, so I thought it would be nice to make the whole space teleportable: users should be free to move around, while still having the option of going directly to the trash and completing the training if they are not interested in exploring our VR campus.

Adding the Teleporting object to the scene, setting up the teleport points in the environment, and attaching the TeleportArea script to the ground were easy. However, it became frustrating when we had to figure out the scale and the position of our camera. The environment was built in such a way that the ground was not set at position (0, 0, 0), and the objects were not tightly attached to the ground. So when we teleported, we ended up beneath the buildings.

At first I tried to change the y-position of the camera so that we could actually see everything, but after raising the camera we were no longer able to see our controllers because they were so far away. Then I tried to raise the y-position of the Player, but we were still teleported to a place below the ground. Finally I figured that, instead of making the ground teleportable, I could create a separate teleportable plane and raise it a little bit, as sketched below. By doing that, I fixed the problem.
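
Here is a minimal sketch of that workaround. The script name, offsets, and scale are illustrative, not taken from the actual project; it assumes SteamVR’s TeleportArea component and a Teleporting object already present in the scene.

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem;

// Sketch: instead of making the (misaligned) ground teleportable, spawn a
// dedicated plane slightly above it and mark that as the teleport area.
public class TeleportPlaneSetup : MonoBehaviour
{
    public float groundY = 0f;      // y-position of the visible ground
    public float liftOffset = 0.1f; // raise the plane a little, as in the fix

    private void Start()
    {
        GameObject plane = GameObject.CreatePrimitive(PrimitiveType.Plane);
        plane.transform.position = new Vector3(0f, groundY + liftOffset, 0f);
        plane.transform.localScale = new Vector3(20f, 1f, 20f); // cover the scene
        plane.AddComponent<TeleportArea>(); // SteamVR's teleport-surface script
    }
}
```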

I also doubled the scale of everything so that it looked right. Then we found several problems when viewing the environment through the headset. First, the buildings, or parts of them, disappeared when we looked at them.

Then I figured out that the camera’s near and far clipping planes should be adjusted according to the scale of the environment.
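
For reference, a tiny sketch of that adjustment; the exact values are illustrative and depend on the environment’s scale.

```csharp
using UnityEngine;

// Sketch of the clipping-plane fix: widen the camera's near/far planes so
// large, distant buildings stop disappearing.
public class ClippingFix : MonoBehaviour
{
    private void Start()
    {
        Camera cam = Camera.main;
        cam.nearClipPlane = 0.1f;  // don't clip close-up objects like controllers
        cam.farClipPlane = 2000f;  // keep far buildings within view distance
    }
}
```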

Another problem we encountered was how to get close to the trash. Because our scene is at such a huge scale, we couldn’t even touch the trash lying on the ground, as it was too far away. So we decided to have the trash float in the air, at approximately the same level as the teleport plane, so that users could grab it with the controllers. However, if we simply disabled the trash’s gravity, it would fly away.

But if we enabled gravity and kinematics at the same time, the trash wasn’t throwable: it couldn’t be dropped into the trash bin. So I searched online for the correct settings for the Throwable script in SteamVR and also asked Vivian how her group did it. To make it work properly, we had to set “Use Gravity” to true and “Is Kinematic” to false on the Rigidbody. Then, for the Throwable script, we needed to select “DetachFromOtherHand”, “ParentToHand” and “TurnOffGravity” for the attachment flags.
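
The same settings, expressed in code rather than in the Inspector, might look like the sketch below. It assumes the Throwable component exposes its attachmentFlags field publicly, as it does in the SteamVR plugin versions I have seen.

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem;

// Sketch: the Rigidbody and Throwable settings described above, applied in
// code. Normally these are set in the Inspector.
public class TrashSetup : MonoBehaviour
{
    private void Awake()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        rb.useGravity = true;    // "Use Gravity" = true
        rb.isKinematic = false;  // "Is Kinematic" = false

        Throwable throwable = GetComponent<Throwable>();
        throwable.attachmentFlags =
            Hand.AttachmentFlags.DetachFromOtherHand |
            Hand.AttachmentFlags.ParentToHand |
            Hand.AttachmentFlags.TurnOffGravity;
    }
}
```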

I also added the ambience sound to the scene, created the sound objects for positive and negative sound feedback, set up the sound script, and attached everything properly to each sound object.
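
As an illustration of how the feedback could be wired up, here is a hypothetical bin script: a trigger collider on the bin checks the tag of incoming trash and plays the positive or negative clip. The tag and field names are made up for the example.

```csharp
using UnityEngine;

// Hypothetical recycling-bin script. Requires a trigger collider on the
// bin; the trash already has a Rigidbody, so OnTriggerEnter will fire.
public class RecyclingBin : MonoBehaviour
{
    public string acceptedTag = "Plastic"; // what this bin recycles
    public AudioSource positiveSound;      // rewarding feedback
    public AudioSource negativeSound;      // "wrong bin" feedback

    private void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag(acceptedTag))
            positiveSound.Play();
        else
            negativeSound.Play();
    }
}
```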

Reflection/Evaluation

One of the takeaways from this project is that in a VR experience, the scene and the interaction cannot and should not be separated. After dividing up the tasks, Simran and I did not really communicate with Lauren and Ju Hee. We then took over an already-made environment that was extremely large, with objects that were somewhat off scale. We spent a lot of time fixing the scale of everything, and I felt really bad about not communicating with them beforehand. We could have saved a lot of time.

Another thing I should bear in mind for future projects is that we should never ignore the fact that the hardware might go down. We almost ran out of time when creating the interactions because the sensors kept disconnecting from each other and the controllers kept disappearing from the scene. We should have planned everything ahead rather than leaving everything to the last minute.

Overall, I enjoyed the process of learning from peers and from obstacles, and our project turned out nicely: we didn’t expect users to be so engaged in our game and to have so much fun throwing trash.

Enjoying The Colors

1. Project Description: describe the space you created and the modes of interaction.

Junior, Claire, and I decided to create a realistic bathroom space in which the user could walk around. We did have limited space, as we used the front part of our classroom, but in a sense the space limitation worked to our advantage. A regular bathroom is not that big, so recreating the bathroom within the limited space replicated the real-life situation. There is a bathtub in the corner, but it is roughly normal size, neither too small nor too big. We also decided to add a towel rack and some towels to show that this is a bathroom that is used frequently, not a sample bathroom in a showcase. The big shelf was added to hold one of the pairs of glasses used for the interaction. And of course, the toilet is placed in the corner to emphasize the fact that the user is in a bathroom. There is also the wide sink with the mirror on top of it, which we intentionally chose to match the overall atmosphere of the bathroom.

Overview of the Scene

We used grab-and-select with the Vive controller’s trigger button as the mode of interaction. By hovering a controller over a pair of glasses, the user can click on it with the trigger, which lets them “take a look” through those specific glasses. This means that when the user chooses glasses that make everything look red, they will see everything in red after selecting them. Another interaction is walking in the virtual bathroom. With the Vive headset calibrated, the user can walk freely inside the bathroom and look closely at the various objects.

2. Process and Implementation: discuss how you built the scene within the development environment and the design choices your group made. What did the brainstorming process involve? How did you go about defining what would be everyday in this world? What were the steps to implementing the interaction? Share images or sketches of visual inspiration or reference, including the storyboard/layout.

For the brainstorming process, Junior, Claire, and I met and discussed what kinds of daily-life situations could be replicated in an interesting way. While we knew we had to replicate some kind of daily-life situation, we also wanted to use the full potential of virtual reality. We talked about how sight is an essential part of life and how bad eyesight can sometimes be a barrier to examining everything carefully. We decided that experimenting with sight would be our main theme. We then discussed how the user could be given the task of finding and trying on different glasses in a space. That way, the user interacts with the objects in the space (the glasses) and goes through the different experiences. We also discussed where we wanted the setting to be. The candidate ideas were the living room (common room) of a shared house, the user’s own room, and the bathroom. We thought the common bathroom would be the most realistic, since people can leave their glasses behind in a common bathroom, forget them, and have to come back to find them.

We used various assets from Unity’s Asset Store to create the bathroom setting. While there is a wide variety of assets in the store, the ones that looked the most sophisticated and appealing were not free. Therefore, we had to scavenge through the free assets to create the bathroom. We gathered different asset packages and played around by placing the objects related to a bathroom. For example, we tried different iterations of the sink to see what actually fit the space, mood, and interior design. The first sink we put in seemed too bland, and after some experimentation we decided to settle on the current one.

3. Reflection/Evaluation: This should discuss your expectations and goals in the context of what you felt was achieved with the finished piece.

Originally, we wanted to add more components to the piece. However, none of us had experience with Unity before this class, so there were many things we had to learn. Our expectation was that each pair of glasses would have its own filter, and we would place a different script on each pair to produce a different effect. Our original idea was to have one pair zoom in, another zoom out, another give double-layered vision, and the real pair give corrected vision. However, figuring out how to create all of these different kinds of vision took so much of our time that we needed a plan B in case we could not debug the scripts we had written.

In the end, we settled on the idea of having each pair of glasses switch scenes. We changed our main theme to “experiencing the colors,” so we decided to blur the vision a little and have the user focus more on the color change. When one pair of glasses is selected, the user “puts on” those glasses and sees the objects in red. Then, if the user “puts on” another pair, they see the objects in green. A minimal sketch of this approach appears after the scene captures below.

Red Scene

Blue Scene

Green Scene
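
Here is that scene-switching sketch, assuming each pair of glasses stores the name of the scene (Red/Green/Blue) it should load; the scene names and the way the method gets called are illustrative.

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Hypothetical per-glasses script: selecting the glasses loads the scene
// variant tinted in that pair's color.
public class GlassesSceneSwitch : MonoBehaviour
{
    public string targetScene = "RedScene"; // set per pair of glasses

    // Call this when the user selects the glasses with the trigger.
    public void PutOn()
    {
        SceneManager.LoadScene(targetScene);
    }
}
```

PutOn() could be invoked, for example, from the grab/select handler attached to each pair of glasses.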

We were able to achieve the basics of what we set out to do: placing the user in a bathroom setting, letting them try on the different glasses, and allowing them to experience a different vision with each pair. Although the different visions turned out a little different from the original idea, the way we managed to recreate the effect in another manner was a result of our good teamwork.

I would say that the most difficult task in this project was selecting the glasses and placing the scene-switching script on the object. Because we had the idea of each pair of glasses having its own filter, we needed to create different iterations of the same setting to achieve that effect. Moreover, our group had only three (two) members, so we struggled with knowledge and implementation compared to the groups that had four members.

Midterm project: final documentation

Recycling @NYUAD is a project that strives to raise awareness of the environment and the waste issue at NYU Abu Dhabi. It seems that many of our students lack the knowledge to recycle properly and frequently, so we want to use this project to address that problem.

The look & feel:

the environment that I made!
look from another angle

One important decision we made was to focus on one portion of the campus: not too much space, but enough to move around in and see trash littered about, while still exuding a sense of enclosure so players don’t wander off and instead stay within the designated area. Our set area was the space right outside D2, where there are grass patches, with A6 and The Arts Center on the side.

My job was to create the actual space. I gathered references by taking photos in the actual space and by looking at Google Maps to see what the player would actually see if they were standing there.

Initially I tried looking for prefabs that could be used for this project, but because our campus is very unique in design, it was difficult to find anything similar. So I started building the buildings from scratch in Unity using 3D shapes. The key was to layer them together to mimic the building structures and to add elements for detail.

For my part, I’m pretty satisfied with how the environment turned out. It was my first time building assets from scratch, and it took a lot of trial and error, but I enjoyed the process and liked the result. I also spent a while experimenting with different skyboxes and eventually settled on a bright, cloudy-sky look, which fit the environment quite well. The main things I learned while building the 3D space were 1) using the right colors, 2) getting the relative sizes of buildings correct, and 3) adding small but important details that make the space look more realistic and accurate.

After I completed all the buildings and the environment was finished, I passed it on to Ju Hee, who populated the space with prefabs of objects such as chairs, tables, and trash.

For the interaction, Simran and Yufei worked on how the player picks up the trash. The pieces of trash glow yellow when the player is nearby, indicating that they can be picked up and then dumped into the recycling bin. One sound plays if an item is recycled properly and another plays if it’s not.
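
A minimal sketch of how such a proximity glow could be implemented is below; the distance threshold, color, and wiring are illustrative (SteamVR’s Interactable hover highlight can also produce a similar effect out of the box).

```csharp
using UnityEngine;

// Sketch: when the player's head is within reach, the trash's material
// emits yellow; otherwise the emission is switched off.
public class TrashGlow : MonoBehaviour
{
    public Transform player;          // e.g. the VR camera
    public float glowDistance = 1.5f; // how close counts as "nearby"
    private Material material;

    private void Start()
    {
        material = GetComponent<Renderer>().material;
        material.EnableKeyword("_EMISSION"); // allow runtime emission changes
    }

    private void Update()
    {
        bool nearby =
            Vector3.Distance(player.position, transform.position) < glowDistance;
        material.SetColor("_EmissionColor", nearby ? Color.yellow : Color.black);
    }
}
```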

In reflection, if we had more time, I think we could have made the interaction more sophisticated: for instance, making the trash come to life and react angrily if the player chooses to ignore it rather than pick it up and recycle it. It could shake and make a roaring sound until the player actually picks it up. I think this would have made the experience more engaging and interesting. Making the trash more alive would also take advantage of VR as a medium, since VR is not bound by how things work in the real world.

We also had issues restyling the environment for the interaction, as the space itself was pretty big. Looking back, I think we could have spent more time adjusting the size and scale.

I would also work more on the space, decorate the buildings a little more, and maybe even add animations of people sitting around and chatting with each other near the Dining Hall. All of these additions would enhance the experience of being in the space, making it more engaging and immersive.

After user-testing & presentation in class:

I was very delighted to find that a lot of my classmates found our project very fun to play. To our surprise, people started throwing the trash around to see if they could land it in the trash can from afar. It was interesting to see how our supposed weakness of having a huge space contributed to the fun. Moving forward, we could make use of this: if the player throws trash from afar and misses the trash can, it could come flying back and double in number! To add to the educational element, we could also have text pop up onscreen, giving figures and facts about waste management at NYU Abu Dhabi and how different kinds of trash should be properly recycled.

I was also pleased that people found the environment very familiar. I spent a lot of time building The Arts Center, the Dining Hall, and A6, as well as the grass patches, from scratch, so it was very rewarding to hear my friends say that they could immediately recognize the space.

Project #2 Development Blog


Here is our storyboard, drawn by Lauren:

Some initial thoughts and designs:

Since the campus is pretty huge and we only have limited space for interaction, we decided to limit our scene to the small garden/square between D2, A6, and the Arts Center. We think this side is more populated than the ERB side, since everyone comes to D2 for meals, and we can actually see Al Ain bottles and other trash lying on the benches.

In terms of feedback, we decided that when users get close enough to a piece of trash, it will light up, indicating that they can interact with it. When users put the trash into the correct bin, they will get a rewarding sound. Otherwise, they will get a negative sound and will not be able to drop the trash into that bin.

For the ambience, we decided to use bird sounds, because we actually have speakers attached to the palm trees on campus that play bird sounds. This is not only a recreation of the campus environment, but also a sarcastic nod to how the campus creates the illusion of something that does not actually exist there.

We split the work so that Ju Hee and I would be in charge of making the environment, while Lauren and Simran would be in charge of making the interactions.

During the class where we presented our initial ideas, Sarah suggested that the main focus of this project should be on the interaction. After discussing the environment a bit, we decided that the scene did not have to be set on campus. I experimented and found a skybox of a block in Tokyo. Japanese society puts a great amount of emphasis on recycling and an eco-friendly lifestyle, so I thought it made sense to place our interaction in Japan. But when we met again in class, we decided that we should still use the campus as our environment, and Lauren became passionate about building it, so she switched tasks with me.

Ju Hee and Lauren have built the environment, and our scene looks like this:

Documentation – Don’t feed the plants!

Original Idea

Our idea was to create an environment that places the player inside a greenhouse, surrounded by plants and the sounds of the rainforest, with a planting station in front of them offering several tools to choose from. The main items to interact with were the pot, the seed, and the watering can. The twist was what would grow from the seed, and what was awaiting discovery behind the player.

The “everyday action” we played around with was gardening, but in this world what gets planted is the seed of a carnivorous plant. And if the player looks behind them, they realize that this plant is one that must be contained in a cage for the safety of the people of this world.

Storyboard:

The player would find themselves in front of a planting platform with a pot, a seed, and a watering can within reach. The player would then pick up the seed, place it in the pot, and water it. As they water it, the seed would shrink until it disappears, and a carnivorous plant would grow in its place and start trying to attack the player.

When the player starts looking around the environment, they notice a large butterfly flying overhead outside; the shadow of the insect on the ground indicates to the player that there is something above them.

Different perspectives

Assets

While creating the environment, we had the luxury of a variety of prefabs to choose from:

The main asset we used was a $1 greenhouse (Green House 3D Model) furnished with a table, several pots, and a hanging lamp. At first the greenhouse was, well, green; we then changed its color to white to fit our image of a Victorian-inspired greenhouse. We separated the items (pots and table) to make it easier to choose which of them we wanted placed inside. When designing the inside of the greenhouse, we placed pots around the player: some were empty, and some housed plants taken from the enemy-plants package, which came with animations.

The other asset we used in abundance was UniTrees 3, which included detailed fantasy trees, plants, and bushes.

What was learned?

When creating the outside environment, we found an easier way to place multiple copies of the same item: adding the tree and plant objects to the terrain and having the brush place them randomly as part of the terrain. We also learned something when resizing the butterfly prefab: whenever it played its animation, the butterfly would shrink back to its original size. When going through the tabs of a prefab, you must resize not only the object but its animation as well.
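
For what it’s worth, there is also a script-based workaround for this kind of animated-scale issue: re-applying the desired scale in LateUpdate, after the animation has run each frame. This is a sketch of that alternative, not what we actually did (we edited the animation itself); the scale value is illustrative.

```csharp
using UnityEngine;

// Alternative workaround for an animation that keyframes the scale:
// LateUpdate runs after animation each frame, so the override sticks.
public class KeepScale : MonoBehaviour
{
    public Vector3 desiredScale = new Vector3(3f, 3f, 3f);

    private void LateUpdate()
    {
        transform.localScale = desiredScale; // override the animated scale
    }
}
```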

Interaction

Main interaction

Limitations and Reflection

Given the time limitation, we found it difficult to implement several pots and seeds for the user to interact with, so we ended up settling on one pair.

Another difficulty we faced was that when items were grabbed, the mesh seemed to become nonexistent, and items such as the watering can would pass through the pot. We tried multiple settings and options, but as we kept trying it only got worse, and eventually the hands in the game disappeared completely. Max had to create a new project and re-import SteamVR for things to go back to normal.

Reflecting on the final scene, it was quite satisfying to come pretty close to our original idea. Including the butterfly overhead gave the player a sense of being miniature compared to what lies outside the greenhouse, as well as a sense of the dangerous outdoors.

The final thing we added was the audio. Extracting audio files from freesound.org, we found several forest sounds, including birds, wildlife, and wind. Adding that file to our scene immerses the player in the game and gives them feedback through another of their senses. We included three other sound files as well. One is for the caged plant located behind the player; its setting was put at 3D sound, meaning that if the sound is coming from the right, the player hears it in the right earphone, and vice versa. The plant that the player grows also plays an audio file of low growling to add a scary factor to these plants, and the watering can has a water audio file attached to it as well.
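
As a small sketch of that 3D-sound setup: setting an AudioSource’s spatialBlend to 1 makes it fully positional, so the growl is heard from the correct direction. The looping choice and values here are illustrative.

```csharp
using UnityEngine;

// Sketch: configure an AudioSource (e.g. on the caged plant) as a
// positional 3D sound.
public class SpatialSoundSetup : MonoBehaviour
{
    private void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1f; // 0 = 2D, 1 = fully 3D/positional
        source.loop = true;       // keep the ambient growl playing
        source.Play();
    }
}
```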

What we hoped to include

Having such an environment allows us the freedom of expression, the freedom to add whatever our minds imagine. We initially hoped to give the player a choice of seeds to plant, each growing a different plant. For the main plant’s interaction, we could have made its animation actually feel like it attacked the player, thus ending the game. But if the game ended there, the player might not have time to fully enjoy the 360-degree view of the environment. Allowing the player to fight back against the plant would immerse them further in the environment, giving them a way to react to what is happening. Although the scene we created is pretty detailed, having animated animals outside and a running river would have given the location even more life; however, it was quite difficult to find a well-suited animated animal to include.

Project 1 Documentation

What I have learned from this project

This was my first time building an environment in Unity. My biggest issue was coming to grips with three-dimensional space: it was really hard to see where exactly many of the objects were placed. Because I was so used to the two-dimensional design work I had done before, making a three-dimensional space was difficult. I could not make good use of the total space; instead, all the objects ended up on one side of the environment. However, I learned how to think about space in a three-dimensional way. I think this will help me in my future projects: I will be able to plan the space better and place things better.

Limitations and Reflection

The biggest issue I had with the project was not considering the viewer. When I designed the place, I was thinking of the space more as a picture: I thought of it as painting a scene rather than building a physical space. Because of this, when I was planning everything, I did not consider the viewer or the viewer’s experience, and as a result the viewer rarely had any interaction with the space. This was also the biggest reason I had issues with the camera.

I did have technical issues with setting up the camera, but I was also lost when thinking about where to put it. There was only one viable placement, in front of the scene, but once I put it there, the space lost any element of interaction.

What I want to work on in my next project

What I want to work on in my next project is building a good base for my space and using all of the space available. I want to build a solid plane and walls around the space so the viewer has an easier time understanding the place. I also want to try building different buildings on the plane.

I would also like to add more elements to the environment. I want to make sure the user can turn around and see different objects around them, instead of finding nothing there.

Here is the link to my presentation about this project

Google Cardboard VR : Invasion

I decided to try Invasion for my Google Cardboard experience. It was a very immersive experience, but there were several factors that I thought were really significant.

As soon as I started watching, I was somewhat confused. Nothing was really happening on screen, and I was just looking around. What really helped me figure out what was happening was sound. I heard a sound and looked around to see where it was coming from; without it, it would have been hard to figure out what exactly was going on. When the alien spaceship appeared in the sky, the surrounding sound made me look around and up to see what was happening. It made me realize that sound is as important as the environment itself when it comes to the viewer’s experience.

Moreover, I realized that when I build an environment, I tend not to use the whole space: I usually use half of it and neglect the space behind the viewer. What I found interesting is that this VR experience lets the viewer explore and move around a lot. It was this use of space that made the viewer, myself for example, look around and fully experience it.

There were also parts where the characters approach the user, which I personally thought was very adorable. The interaction with the characters made me feel like I was actually there with them instead of watching them from far away. The eye contact the characters make and the noises they produce were very significant.