Final project: Lauren & Shenuka

For our final project, we’ve decided to create an environment where the player can move pebbles on the ground of a deserted island surrounded by sea, with the stars up in the sky reflecting the movement of those pebbles as they are moved. After a certain amount of time has passed, the sun rises and sets, thus “renewing” the sky and giving the player a new blank canvas to create another constellation on.

This idea came from reading a chapter of Invisible Cities, where things happening on the ground affect what’s above them in a similar manner.

Here’s what we imagine the environment to look like:

courtesy of Shenuka Corea

*note: the sea surrounding the little deserted island provides natural boundaries that constrain where the player can move.

And a storyboard of how the world would work:

courtesy of Shenuka Corea

Our project aims to use the space, the objects within it, the relationship between them, and the sense of time in relation to cause and effect to convey “the stuff of story.” The interaction between the player and the world lends itself to discoveries and experiments.

Assets we’ll need:

  • stars in the sky
  • island terrain and water for the ground
  • night sky as skybox
  • animation of sun rising and setting to restart the sky

Interactions to design (+code):

  • moving of pebbles – objects with gravity, and responsive to where the player is moving them
  • similar reflection in the moving of stars
  • extra time element (delay) added to stars so they leave a trail behind as they’re moving into places corresponding to pebbles, creating the effect of shooting stars!
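Since this is still the planning stage, here is only a rough sketch of how the delayed star-follow effect might work in a Unity script (class names, the height value, and the follow speed are all placeholders, not final code):

```csharp
using UnityEngine;

// Attached to a star: drifts toward the position that mirrors its pebble,
// lagging behind it. Adding a TrailRenderer component to the star would
// produce the shooting-star trail as it moves.
public class StarFollow : MonoBehaviour
{
    public Transform pebble;         // the pebble this star mirrors
    public float height = 50f;       // how high the star sits above the island
    public float followSpeed = 0.5f; // small value = visible delay

    void Update()
    {
        // Mirror the pebble's ground position up into the sky.
        Vector3 target = new Vector3(pebble.position.x, height, pebble.position.z);
        // Lerp a little each frame: the lag creates the trailing effect.
        transform.position = Vector3.Lerp(transform.position, target,
                                          followSpeed * Time.deltaTime);
    }
}
```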

Sound/light:

  • ambient during nighttime – see first photo referenced above
  • calming sounds of ocean waves in the background
  • stars above sparkling a little

Project 2 Documentation. Don’t Feed The Plants.

This project was created together with Mai Lootah and Shenuka Corea. Since they worked mostly on the environment, textures, and animation while I worked on the scripts, I will mostly discuss the scripts.

Description:
This project is a greenhouse on an alien planet where you can see all sorts of alien plants. When you place a seed in the pot and water it, something weird grows. Some plants make sound, like the one behind you, placed in a cage.
There were two main interactions and some secondary interactions. The first main interaction was tied to the seed: you can pick it up and throw it, but the goal is to put it into the pot. Once the seed is in the pot, you can water it to make it grow. That is the second interaction: tilting the watering jug so it pours water. This script was a disaster since it was the first script I had ever written. The secondary interactions are the gardening instruments, which can be picked up and thrown.

Implementation:
It started off with an empty terrain where we placed a greenhouse imported from the Asset Store (it cost $1) and added tree meshes and grass outside. After that, we created a work station that included some benches from the same greenhouse asset, plus pots. Right after that, we added mesh colliders to every part of the work station and brought in a new asset: the gardening tools. The main camera was set in front of the work station and consisted of the “Player” prefab from the SteamVR interaction example scene. After the environment was set up by Shenuka and Mai, I started working on the scripts. The first script made the particle system attached to the water canister play when the canister was tilted. This script used transform.eulerAngles (it took me a while to figure out) and simply played the particles whenever the angle was in the desired range. The next script was collision detection between the seed and the pot. That one was not hard: when the seed is on the soil of the pot, it sets a boolean to true. The next script detected collisions between the particle system and the seed, which worked the same way. Next was a counter for the seed that counts the water particles, but only while the seed is in the pot; that is where the seed-and-pot collision script comes in. Once the seed has received enough water, it shrinks and the plant grows. This was handled by another script named “growth”: if growth was true, the seed began to shrink and the plant began to grow. Those are all the scripts that were created for our project 2.
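The tilt-to-pour and water-counting logic described above can be sketched roughly like this. These are not the actual project scripts; the class names, threshold values, and tilt axis are my own guesses, and the particle system needs “Send Collision Messages” enabled in its collision module for OnParticleCollision to fire:

```csharp
using UnityEngine;

// Attached to the watering can: plays the water particle system
// while the can is tilted past a threshold angle.
public class PourWhenTilted : MonoBehaviour
{
    public ParticleSystem water;   // assigned in the Inspector
    public float pourAngle = 60f;  // hypothetical threshold

    void Update()
    {
        // eulerAngles wraps around 0-360, so measure distance from upright.
        float tilt = Mathf.Abs(Mathf.DeltaAngle(0f, transform.eulerAngles.z));
        if (tilt > pourAngle)
        {
            if (!water.isPlaying) water.Play();
        }
        else if (water.isPlaying)
        {
            water.Stop();
        }
    }
}

// Attached to the seed: counts water particle hits, but only while
// the seed sits in the pot, then triggers growth.
public class SeedGrowth : MonoBehaviour
{
    public GameObject plant;      // the plant to grow, scaled to zero initially
    public int dropsNeeded = 200; // hypothetical threshold
    public bool inPot;            // set true by the pot's trigger script
    int drops;

    void OnParticleCollision(GameObject other)
    {
        if (!inPot) return;
        drops++;
        if (drops >= dropsNeeded)
        {
            // Minimal version: swap scales at once. The real "growth"
            // script shrank the seed and grew the plant gradually.
            transform.localScale = Vector3.zero;
            plant.transform.localScale = Vector3.one;
        }
    }
}
```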

Reflection:
Our goal was to make a greenhouse with a man-eating plant. You can find a clue that it is dangerous if you look behind you (the same plant, in a cage). As can be seen in the pictures on the blog post dedicated to our expectations before the project started, we met those expectations. The environment was well made enough to convey that you are actually on an alien planet.

Storyboard before the project was created
Expectation before the project was created
First view of the greenhouse
Upgraded view of the greenhouse
The inside of the greenhouse
implementing the water pouring when tilted
Final space

Project 1 Documentation. Sunset Valley.

This project was made to show an environment that could be considered perfect. The environment we set as a goal was a sunset valley.

Description:
I created a mountain area with trees. The main camera was set near the top of a mountain, not the highest one, but above ground level. From that point of view you could see a lovely sunset and some trees below you.

Process and implementation:
Using the terrain tool, I raised the terrain in some areas to make it look like mountains, and using the brush tool I added tree and grass meshes. After that I added some filters to the main camera to make the sunset look even better, and moved the camera using the “smooth mouse look” script. This made a wonderful representation of the sunset valley.

Reflection:
This place is built to immerse a person in a nice spot where you can clear your thoughts and just relax. I have personally never seen a place with such a sunset before, but now I have a goal to find something in real life that looks very similar.

Project 1 Documentation

Project 1 Development Blog Link: https://alternaterealities.nyuadim.com/2019/02/11/project-1-development-blog-5/

In this project, the goal I set out to accomplish was to create a peaceful environment. Even though the implementation differed in many aspects from my initial idea for the project, the overarching goal of creating a peaceful environment was definitely accomplished. As such, I will use this piece to talk about the similarities and differences between my project’s ideation and its actual implementation, as well as the process I went through to go from the former to the latter.

Differences

If you look at my first entry in the development blog (link above), my initial idea had the user in the middle of the tent. Even though I could have accomplished this in my project, I decided that locating the user outside of the tent created a more meaningful impact, given that the user can enjoy more of the scenery thanks to the 360-degree view of the natural landscape I created.

I also didn’t add the snack assets to my project. I could not find any of the assets I wanted, so I instead placed camping tools and wood logs all over my scene. Cutting wood logs was also something I used to do quite often on my camping trips, so this turned out to be a really good alternative.

Similarities

All in all, I can say that the tent part of my idea was accomplished effectively. However, I didn’t expect to get so invested in designing the natural landscape. I spent more than 70% of my time placing trees and playing around with the mountain assets. I really liked this portion of my project, and it made me retrospectively analyze the importance I placed on nature. Whenever I went camping, I thought that the most enjoyable part of the experience was spending time with my friends, and that the landscape/scenery came in as an added bonus. After doing this project, I now know that the refreshing look of nature was more important than I initially perceived it to be, and I hope to value it even more as I go on more camping trips.

Implementation:

As stated in the development blog, the design of the scenery was not the most strenuous part of the process. The most time-consuming part was making the executable file. However, viewing my project in a Google Cardboard kit was worth it, as it gave me a newfound appreciation of the scene I created. Also, the terrain object proved really difficult to alter in the version of Unity I had installed on my computer. As such, I had to use a cube for my floor and then a mountain asset to fill the user’s distant view with mountain tops.

Project 2 Documentation

What I have learned from this project

I have learned to build bigger environments more effectively. Before, it was hard for me to plan out an environment and build it, but this time I was able to see the environment three-dimensionally. It was definitely easier to build the environment.

Limitations and Reflection

The project could be improved with these elements in the future:

  • More complex objects
  • Having the trash react angrily if you pass without picking up
  • Having a message indicate the right place to recycle something if you don’t put it in the correct bin
  • Develop environment and interaction system more in tandem with each other

Moreover, we should have had more conversations about interactions and the environment. We faced some hardships when it came to adding interactions because we built the environment separately from interactions.

What I want to work on in my next project

I would like to work on the interaction part, and while working on it, I would like to work with the environment to make sure it is scaled properly. I want users to feel like they are actually part of the environment: at the right height and the right scale.

Here is the link to my presentation

Project #2 Documentation

Project Description

For project #2, our group decided to build an experience that tackles the problem of sustainability on campus. We wanted to base our scene on campus, with trash on the ground. In the real world, if someone passes by trash and ignores it, there are no consequences. Besides, people all have the mindset that someone else will act on it. We wanted to raise awareness in the NYUAD community by creating an alternate reality where, if people walk by a piece of trash without picking it up, they receive negative feedback indicating that they are not acting in a proper way.

Moreover, because of the diversity of the community, there isn’t a shared standard for recycling that everyone agrees upon. Having always been fairly ignorant about the environment, I really get confused when I throw away an empty Leban bottle: should I put it in general waste or plastics? The bottle is definitely recyclable, but only after I clean it. Recycling can be extremely complicated: I still remember how shocked I was when the RA told us that we should recycle the lid of a Starbucks cup but throw the paper cup into general waste. By creating an educational environment which mimics what actually happens on campus, we hope to teach people how to recycle in an entertaining way. Through repeated interaction within our scene, users might come to perceive recycling as less burdensome as they get more familiar with it.

Process and Implementation

The tasks were divided up: Ju Hee and Lauren were in charge of the environment, while Simran and I were in charge of the interaction. After the environment was created, our scene looked like this:



When Simran and I started to work on the interaction with trash in our environment, we found a lot of problems with the environment. First, because we failed to set up our VR station when we first started the project, we didn’t have a sense of the size of our VR space and how it is reflected in Unity. Had I realized we needed to set up the VR station before Lauren and Ju Hee started building the environment, we could have saved a lot of time and energy on rescaling the space. The environment was too large, so the movement of users was not significant: users couldn’t really tell that they were moving inside the environment. So we decided to add teleporting. We divided our tasks: I would be mainly in charge of the teleporting and Simran would focus on the interactions, but we helped each other out throughout the process.

I went through several tutorials to understand how teleporting in SteamVR works in Unity. Here are the links to the tutorials: https://unity3d.college/2016/04/29/getting-started-steamvr/

https://vincentkok.net/2018/03/20/unity-steamvr-basics-setting-up/

At first, I decided to place teleport points next to each piece of trash, so that users could easily access each piece by aiming at the right teleport point. Then I realized that since we have such a huge space, users would never be able to reach the areas where there is no trash, so I thought it would be nice to make the whole space teleportable: users should be free to move in our space, while still having the choice of going directly to the trash and completing the training if they are not interested in exploring our VR campus.

Adding the teleporting object to the scene, setting up the teleport points in the environment, and attaching the TeleportArea script to the ground were easy. However, it became frustrating when we had to figure out the scale and the position of our camera. The environment was built in a way that the ground was not set at position (0, 0, 0), and the objects were not tightly attached to the ground. So when we teleported, we got teleported beneath the buildings.

At first I tried to change the y-position of the camera so that we could actually see everything, but after raising the camera we could no longer see our controllers because they were so far away. Then I tried to raise the y-position of the player, but we were still teleported to a place below the ground. Then I figured out that, instead of making the ground teleportable, I could create a teleportable plane and raise it a little bit. By doing that, I fixed the problem.
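The raised-plane workaround described above was done in the editor, but the idea can be sketched in code like this (TeleportArea is the SteamVR Interaction System script; the offset and plane size here are illustrative, not the values we used):

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem;

// Creates a slightly raised, invisible, teleportable plane so the
// player is never teleported beneath the ground geometry.
public class TeleportPlaneSetup : MonoBehaviour
{
    void Start()
    {
        GameObject plane = GameObject.CreatePrimitive(PrimitiveType.Plane);
        // Small offset above the uneven ground fixes the below-ground teleports.
        plane.transform.position = transform.position + Vector3.up * 0.1f;
        plane.transform.localScale = new Vector3(50f, 1f, 50f); // cover the play area
        plane.GetComponent<MeshRenderer>().enabled = false;     // invisible to the player
        plane.AddComponent<TeleportArea>();                     // makes it a teleport target
    }
}
```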

I also doubled the scale of everything so that the proportions looked right. Then we found several problems when we viewed the environment through the headset. First, the buildings, or some parts of them, disappeared when we looked at them.

Then I figured out that the camera’s near and far clipping distances should be adjusted according to the scale of our environment.
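The clipping fix amounts to widening the camera’s clipping range; a minimal sketch (the exact values depend on the environment’s scale, and these numbers are just examples):

```csharp
using UnityEngine;

// After scaling the environment up, widen the camera's clipping range
// so that distant buildings stop being culled from view.
public class FixClippingPlanes : MonoBehaviour
{
    void Start()
    {
        Camera cam = GetComponent<Camera>();
        cam.nearClipPlane = 0.1f;  // keep the controllers visible up close
        cam.farClipPlane = 2000f;  // large enough for the scaled-up buildings
    }
}
```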

Another problem we encountered was how to get close to the trash. Because our scene is at such a huge scale, we couldn’t even touch the trash lying on the ground because it was so far away, so we decided to have the trash float in the air, at approximately the same level as the teleport plane, so that users could grab it with the controllers. However, if we simply disabled the gravity of the trash, it would fly away.

But if we enabled gravity and kinematics at the same time, the trash wasn’t throwable: it couldn’t be dropped into the trash bin. So I searched online for the correct settings for the Throwable script in SteamVR and also asked Vivian how her group did it. In order to make it work properly, we have to set “Use Gravity” to true and “Is Kinematic” to false on the Rigidbody. Then, for the Throwable script, we need to select “DetachFromOtherHand”, “ParentToHand” and “TurnOffGravity” for the attachment flags.
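We set these in the Inspector, but the same configuration can be expressed in code roughly like this (enum names are from the SteamVR Interaction System and may differ slightly between plugin versions):

```csharp
using UnityEngine;
using Valve.VR.InteractionSystem;

// Configures a piece of trash so it can be grabbed and thrown,
// matching the Rigidbody and Throwable settings described above.
public class TrashSetup : MonoBehaviour
{
    void Start()
    {
        Rigidbody rb = GetComponent<Rigidbody>();
        rb.useGravity = true;   // falls naturally when released
        rb.isKinematic = false; // physics-driven, so it can be thrown

        Throwable throwable = GetComponent<Throwable>();
        throwable.attachmentFlags =
            Hand.AttachmentFlags.DetachFromOtherHand |
            Hand.AttachmentFlags.ParentToHand |
            Hand.AttachmentFlags.TurnOffGravity;
    }
}
```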

I also added the ambience sound to the scene, created the sound objects for positive and negative sound feedbacks, set up the sound script, and attached them properly to each sound object.

Reflection/Evaluation

One of the take-aways from this project is that for VR experience, the scene and the interaction cannot and should not be separated. After dividing the tasks up, Simran and I did not really communicate with Lauren and Ju Hee. Then we took over the already-made environment that was extremely large, and the objects in the scene were kind of off scale. We spent a lot of time fixing the scale of everything, and I felt really bad about not communicating with them beforehand. We could have saved a lot of time.

Another thing I should bear in mind for future projects is that we should never ignore the fact that the hardware might go down. We almost ran out of time when creating the interactions because the sensors kept disconnecting from each other and the controllers kept disappearing from the scene. We should have planned everything ahead rather than leaving everything to the last minute.

Overall, I enjoyed the process of learning from peers and from obstacles, and our project turned out nicely: we didn’t expect users to be so engaged in our game and to have so much fun throwing trash.

Enjoying The Colors

1. Project Description: describe the space you created and the modes of interaction.

Junior, Claire, and I decided to create a realistic bathroom space which the user could walk around in. We did have limited space, as we used the front part of our classroom, but in a sense the space limitation worked to our advantage. A regular bathroom is not that big, so recreating the bathroom within the limited space replicated the real-life situation. While there is a bathtub in the corner, it is a relatively normal size, neither too small nor too big. We also decided to put in a towel rack and some towels to show that it is a bathroom being used frequently, not a sample bathroom in a showcase. The big shelf was added to hold one of the pairs of glasses that would be used for the interaction. And of course, the toilet is placed in the corner to emphasize the fact that the user is in a bathroom. There is also the wide sink with the mirror on top of it, which we intentionally chose to match the overall atmosphere of the bathroom.

Overview of the Scene

We used grab and select via the Vive controller’s trigger button as the mode of interaction. By hovering a controller over the glasses, the user can click on them using the trigger, which allows them to “take a look” through that specific pair of glasses. This means that when the user chooses a pair of glasses that makes everything look red, the user will see everything in red after selecting those glasses. Another interaction is walking in the virtual bathroom setting. Thanks to the calibrated Vive headset, the user can walk freely inside the bathroom and look closely at the various objects.

2. Process and Implementation: discuss how you built the scene within the development environment and the design choices your group made. What did the brainstorming process involve? How did you go about defining what would be everyday in this world? What were the steps to implementing the interaction? Share images or sketches of visual inspiration or reference, including the storyboard/layout.

As for the brainstorming process, Junior, Claire, and I met and discussed what kinds of daily-life situations could be replicated in an interesting way. As much as we knew that we had to replicate some kind of daily-life situation, we wanted to use the full potential of virtual reality. We talked about how “sight” is an essential part of life and how having bad eyesight can sometimes be a barrier when you want to examine everything carefully. We decided that experimenting with “sight” would be our main theme. We then discussed how the user could be given a task to find and try on different glasses in a space. That way, the user can interact with the objects in the space (the glasses) and go through the different experiences. We also discussed where we wanted the setting to be. The different ideas were the living room (common room) of a shared house, the user’s own room, and the bathroom. We thought that the common bathroom would be the most realistic, since people can leave their glasses behind in the common bathroom, forget them, and have to come back to find them.

We used various assets from Unity’s Asset Store to create the bathroom setting. While there is a variety of assets in the Asset Store, the ones that looked the most sophisticated and appealing were not free. Therefore, we had to scavenge through the free assets in order to create the bathroom. We gathered different asset packages and played around with placing the objects related to the bathroom. For example, we tried different iterations of the sink to see what actually fit the space, mood, and interior design. The first sink we originally put in seemed too bland, and after some experimentation, we decided to settle on the current sink.

3. Reflection/Evaluation: This should discuss your expectations and goals in the context of what you felt was achieved with the finished piece.

Originally, we wanted to add more components to the current piece. However, none of us had experience using Unity before this class, so there were many things we had to learn. Our expectation was that each pair of glasses would have its own filter, and we would place a different script on each pair to produce a different effect. Our original idea was to have one pair zoom in, another zoom out, another give double-layered vision, and the real pair give corrected vision. However, figuring out how to create all of these different kinds of vision took so much of our time that we needed to come up with a plan B in case we could not debug the scripts we had written.

In the end, we settled on the idea of having each pair of glasses switch scenes. We changed our main theme to “experiencing the colors,” so we decided to blur the vision a little and have the user focus more on the color change. When one pair of glasses is selected, the user “puts on” those glasses and sees the objects in red. Then, if the user “puts on” another pair, the user sees the objects in green.
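Implemented as a scene switch, each pair of glasses only needs to load its own pre-tinted scene; a minimal sketch (the scene name and the OnSelected hook are hypothetical, and the scenes must be added to Build Settings):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Attached to a pair of glasses: when the user selects it with the
// controller trigger, load the scene tinted in this pair's color.
public class GlassesSwitcher : MonoBehaviour
{
    public string sceneName = "RedScene"; // set per pair in the Inspector

    // Called by whatever selection mechanism the controller uses
    // (e.g. a hover raycast plus a trigger press).
    public void OnSelected()
    {
        SceneManager.LoadScene(sceneName);
    }
}
```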

Red Scene

Blue Scene

Green Scene

We were able to achieve the basics of what we wanted, in the sense that we wanted to give the user the experience of being in a bathroom setting, trying on the different glasses, and experiencing a different vision with each pair. Although the different visions turned out a little different from the original idea, the way we recreated the effect in a different manner was a result of our good teamwork.

I would say that the most difficult task in this project was selecting the glasses and placing the scene-switching script on the object. Because we had the idea of each pair of glasses having its own filter, we needed to create different iterations of the same setting in order to achieve that effect. Moreover, our group only had three (two) members, so we struggled in regards to knowledge and implementation compared to the groups that had four members.

Midterm project: final documentation

Recycling @NYUAD is a project that strives to bring awareness to the environment and waste issue at NYU Abu Dhabi. It seems that many of our students lack the knowledge of recycling properly and frequently, so we want to use this project to address that problem.

The look & feel:

the environment that I made!
look from another angle

One important decision we made in terms of the campus was to focus on one portion of it: not too much space, but enough to move around in, see trash littered around, and at the same time exude the sense of a closed space so players don’t wander off and stay within the designated area. Our set area was the space right outside D2, where there are grass patches, with A6 and The Arts Center on the side.

My job was to create the actual space. I took references from photos of the actual space and also looked at Google Maps to see what the player would actually be seeing if they were standing there.

Initially I tried looking for prefabs that could be used for this project, but because our campus is very unique in design, it was difficult to find anything similar. So I started building the assets from scratch in Unity using 3D shapes. The key was to layer them together to mimic the building structures and add elements for detail.

On my part, I’m pretty satisfied with how the environment turned out. It was my first time building assets from scratch and it took a lot of trial and error, but I enjoyed the process and liked how it turned out. I also spent a while experimenting with different skyboxes and eventually settled on a bright, cloudy-sky look, which fit the environment quite well. The main things I learned in the process of building the 3D space were 1) using the right colors, 2) getting the relative sizes of the buildings correct, and 3) adding small but important details that make the space look more realistic and accurate.

After I completed all buildings and the environment was finished, I passed it onto Ju Hee, who incorporated prefabs of objects that populate the space, such as chairs, tables, and trash.

For the interaction, Simran and Yufei worked on how the player would pick up the trash. The pieces of trash glow yellow when the player is nearby, indicating that they can be picked up, and they can then be dumped in the recycling bin. One sound plays if a piece is recycled properly, and another plays if it’s not.
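Simran and Yufei built this part, so I can only sketch how the proximity glow and sound feedback might be wired up; the class, tags, distance, and use of an emission color are my assumptions, not their actual script:

```csharp
using UnityEngine;

// Attached to a piece of trash: glows yellow when the player is close,
// and plays one of two sounds when it lands in a bin.
public class TrashFeedback : MonoBehaviour
{
    public Transform player;
    public AudioClip correctSound, wrongSound;
    public float glowDistance = 2f; // hypothetical radius
    Material mat;
    AudioSource audioSource;

    void Start()
    {
        mat = GetComponent<Renderer>().material;
        mat.EnableKeyword("_EMISSION"); // allow runtime emission changes
        audioSource = GetComponent<AudioSource>();
    }

    void Update()
    {
        bool close = Vector3.Distance(player.position, transform.position) < glowDistance;
        mat.SetColor("_EmissionColor", close ? Color.yellow : Color.black);
    }

    void OnTriggerEnter(Collider other)
    {
        // "RecyclingBin" and "Recyclable" are placeholder tags.
        if (other.CompareTag("RecyclingBin"))
            audioSource.PlayOneShot(CompareTag("Recyclable") ? correctSound : wrongSound);
    }
}
```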

In reflection, if we had more time I think we could have worked more on making the interaction more sophisticated: for instance, making the trash come to life and react angrily if the player chooses to ignore it and not pick it up to recycle it. It could shake and make a roaring sound until the player actually picks it up. I think this would have made the experience more engaging and interesting. Making the trash come more alive would also take advantage of VR as a medium, since it’s not bound by how things work in the real world.

We also had issues re-styling the environment for the interaction as the space itself was pretty big. Looking back, I think we could have spent more time trying to adjust the size and scale more.

I would also work more on the space, decorate the buildings a little more, and maybe even add animations of people sitting around and chatting to each other near the Dining Hall. All of these contributions would add to the experience when the player is in the space, making it engaging and immersive.

After user-testing & presentation in class:

I was very delighted to find that a lot of my classmates found our project very fun to play. To our surprise, people started throwing the trash around to see if they could throw it into the trash can from afar. It was interesting to see how our supposed weakness of having a huge space contributed to the fun element. Moving on, we could make use of this feature: if the player throws the trash from afar and fails to get it into the trash can, it comes flying back and doubles in number! To add to the educational element, we could also have words pop up onscreen, giving numbers and facts about waste management at NYU Abu Dhabi and how different kinds of trash should be properly recycled.

I was also pleased that people found the environment very familiar. I spent a lot of time trying to build The Arts Center, The Dining Hall, A6 as well as the grass patches from scratch, so it was very rewarding to hear my friends telling me that they could immediately recognize the space.

Documentation – Don’t feed the plants !

Original Idea

Our idea was to create an environment that places the player inside a greenhouse, surrounded by plants and the sound of the rainforest, with a planting station in front of them with several tools to choose from. The main items to interact with were the pot, seed, and watering can. The twist would be what would grow from this seed. And what was awaiting their discovery behind them.

Our “everyday action” that we played around with was the action of gardening, but in this world what was planted was the seed of a carnivorous plant. And if they look behind them, the player realizes that this plant is one that must be contained in a cage for the safety of the people in this world.

Storyboard:

The player would find themselves in front of a planting platform with a pot, seed, and watering can within their reach. The player would then pick up the seed, place it in the pot, and water it. As they water it, the seed would shrink until it disappears, and a carnivorous plant would grow in its place and start trying to attack the player.

When the player starts looking around their environment, they would be able to notice a large butterfly flying above them outside; the shadow of the insect on the ground would indicate to the player that there is something above them.

Different perspectives

Assets

While creating the environment, we had the luxury of a variety of prefabs to choose from:

The main asset we used was a $1 greenhouse (Green House 3D Model) furnished with a table, several pots, and a hanging lamp. At first the greenhouse was, well, green, and then we changed the color to white to fit our image of a Victorian-inspired greenhouse. We separated the items (pots and table) to make it easier for us to choose which of them we wanted placed inside. When designing the inside of the greenhouse, we placed pots around the player; some were empty and some housed plants taken from the enemy-plants package, which came with animations.

The other asset we used in abundance was UniTrees 3, which included detailed fantasy trees, plants, and bushes.

What was learned?

When creating the outside environment, we found an easier method to place multiple identical items: adding the tree and plant objects to the terrain and having the brush place them randomly as part of the terrain. We also learned something while resizing the butterfly prefab: whenever it played its animation, the butterfly would shrink back to its original size. When going through the tabs of the prefab, you must resize not only the object but its animation as well.

Interaction

Main interaction

Limitations and Reflection

With the time limitation, we found it difficult to implement several pots and seeds for the user to interact with, so we ended up settling on one pair.

Another difficulty we faced was that when items were grabbed, the mesh would become nonexistent, and items such as the watering can would go through the pot. We tried multiple settings and options, but as we kept trying it got worse, and eventually the hands in the game were deleted completely. Max had to create a new project and re-import SteamVR for it to go back to normal.

Reflecting on the final scene, it was quite satisfying to come pretty close to our original idea. Including the butterfly overhead gave the player a sense of being miniature compared to what is outside the greenhouse, and also a sense of the dangerous outdoors.

The final thing we added was the audio. Extracting audio files from freesound.org, we found several sounds of the forest, including birds, wildlife, and wind. Including that file in our scene immerses the player in the game and gives them feedback through another one of their senses. We included three other sound files as well. One is for the caged plant located behind the player; its setting was put to 3D sound, which means that if the sound is coming from the right, the player hears it from the right earphone, and vice versa. The plant that the player grows also plays an audio file of low growling to add a scary factor to these plants. And the watering can has a water audio file attached to it as well.
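The 3D-sound setting for the caged plant corresponds to Unity’s spatial blend on the AudioSource; a minimal sketch of doing the same thing in code (we actually set this in the Inspector):

```csharp
using UnityEngine;

// Makes the caged plant's audio fully positional, so the player hears
// it from the correct direction through the headphones.
public class CagedPlantAudio : MonoBehaviour
{
    void Start()
    {
        AudioSource src = GetComponent<AudioSource>();
        src.spatialBlend = 1f; // 0 = 2D, 1 = fully 3D (positional)
        src.loop = true;       // the growl keeps playing in the background
        src.Play();
    }
}
```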

What we hoped to include

Having such an environment allows us the freedom of expression, and the freedom to add whatever our minds imagine. We initially hoped we could give the player a choice of seeds to plant, each growing a different plant. As for the main plant’s interaction, we could have made its animation actually feel like it attacked the player, thus ending the game. But if the game ended there, the player might not have time to fully enjoy the 360-degree view of the environment. Allowing the player to fight back against the plant would immerse them further in the environment, giving them a way to react to what is happening. Although the scene we created is pretty detailed, having animated animals outside and a running river would further give life to the location, but it was quite difficult to find a well-suited animated animal to include.