Final Project Dev Journal | Reflection

13/4

After a bit of brainstorming, Ganjina, Yeji, and I narrowed our ideas down to two: one that intersected apocalypse narratives with Invisible Cities, and another based on one of the invisible cities themselves. After receiving feedback on our ideas, we decided upon the latter, an experience expanding upon the city of “Valdrada” (Calvino 45-46). The practical limitations of interaction in Google Cardboard made the former idea much less compelling.

According to Calvino, this city was “built. . . on the shores of a lake, with houses all verandas one above the other,” such that “the traveler . . . sees two cities: one erect above the lake, and the other reflected, upside down” (Calvino 45). Calvino goes on to describe it as “twin cities” that “are not equal,” for “every face and gesture is answered, from the mirror, by a face and gesture inverted, point by point” (45-46). In this way, they “live for each other . . . but there is no love between them” (46).

From these descriptions, we began to think about the different stories that might emerge from, and the environments that would embody, such a city. For the environment, we are currently leaning toward a tropical island style, though this may change as we work out the story more thoroughly. We may also adopt a visual style similar to Kentucky Route Zero. The next step is to flesh out the story in its entirety.

So far, we’ve established that the setting contains two cities which have been affected by a large-scale traumatic event. In the non-reflection city, the characters respond to the stress of that event in ways which are ultimately positive, whereas in the world below, their responses reveal the darker aspects of humanity. In some way, these two worlds will interact visually, and perhaps causally.

As for the specific stories to which users will bear witness, so far I’ve thought of following the relationship between two people. In the non-reflection city, users watch the key moments of their lives that bring them to be the closest of friends, and in the reflection city, users watch them become enemies who eventually kill each other. The user might be able to influence the story by choosing which world is the physical one and which is the reflection.

28/4

Much has changed in the last two weeks. To start, our environment and story have shifted significantly. Shortly after finishing the last post, Yeji suggested a compelling environment inspired by Amsterdam, coupled with a story that revolves around the user exploring the meaning of reflection. We decided to develop and begin building that idea.

Pre-development Story Description

Yeji’s illustrations of the idea

There will be two cities, one we’ve been calling the “initial” city and the other the “reflection” city. The user starts in the initial city: the houses are colorful, and the sky is beautiful as the street lights come on and light up the city. People are walking around, and the atmosphere is cheerful. A pier stretches out onto the lake. The user can walk around the lake and talk with the NPCs wandering about their daily lives. Over time, a light appears in the water at the end of the pier, drawing the user to it. When the user reaches the light, they find they can click on it, and suddenly the world shifts.

The user finds themselves in a city that is the reflection of the initial city. The building positions are flipped. The sky is dark. The houses are grey. No one is outside. A few houses have color and a visual cue similar to the one on the water, suggesting the user may interact with them. As the user approaches these homes, they can peer into the windows to see an interaction reflecting a major event which has negatively affected the residents, something those in the initial city spend their lives ignoring.

Development So Far

Paper Prototyping

After establishing a general outline of the experience, we sketched paper prototypes for the play-testing session.

We received several major feedback points:

  • How will we guide the user to the pier? Dimming the sky to increase visual salience of the cue may help. Using NPC interactions and dialogue would be even better. Starting the user nearer to the pier might also help.
  • How are we moving? We could do automatic movement, like in TrailVR, or Google Maps-style point-and-teleport.
  • How much animation are we doing? Trying lots of animation means higher potential for making good use of VR storytelling space. It also means more overhead and more work.

Once we received feedback on our paper prototypes, we decided to pull assets before dividing up the work of building and finalizing the story.

Story

We shared a few ideas at the end of a meeting, and then Yeji took the lead on the story until we could meet again to finalize it. As of now, it is still being fleshed out. The “big event” idea mentioned earlier has been grounded in a fear of the unknown world outside: the city, as depicted, is entirely enclosed around the lake, and the reason for this peculiar design will likely drive the tension of the story. The story is also emerging from an asset pack of fantasy characters that we picked up.

Environment

Ganjina started building the environment based on the drawings, and a few things have changed as a result of the assets we are using. First, the “lake” is square. Second, the pier is a bridge, though this may change. Otherwise, the first draft of the initial city is done.

Dialogue System

For the dialogue, we considered doing voice-overs but ultimately decided that a text-based system would be more interesting to try and much more scalable. I was given the task of building the dialogue system.

The first step was to build a panel for all of the UI to go on. I knew that I wanted to base the panel on the optimal ergonomic VR workspace areas defined by Mike Alger in 2015. For that, I would need a structural framework which could take real-world inputs, and my skills with Unity are not yet such that I could satisfy my own need for accuracy. Luckily, I’ve been working in Fusion 360 a ton for Machine Lab, so I was able to model my own panel there.

There are two versions. The first is a spherically curved panel. I estimated two-thirds arm reach distance as two-thirds of the FOV of Google Cardboard, roughly 60 degrees. I then took the band from 15 to 50 degrees below eye level and intersected the sketches with a body.

spherically curved panel
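Incidentally, the same placement can be approximated at runtime rather than modeled. Here is a minimal sketch, assuming a `reachDistance` value standing in for two-thirds arm reach and using 32.5 degrees, the midpoint of the 15-to-50-degree band:

```csharp
// Hypothetical sketch: place a panel at the 32.5-degree midpoint of the
// 15-50 degree band below eye level, at an assumed reach distance.
using UnityEngine;

public class PanelPlacer : MonoBehaviour
{
    public Transform panel;            // the UI panel to position
    public float reachDistance = 0.5f; // assumed 2/3 arm reach, in meters
    public float angleBelowEye = 32.5f;

    void Start()
    {
        Transform eye = Camera.main.transform;
        // Pitch the forward vector down around the camera's right axis.
        Vector3 dir = Quaternion.AngleAxis(angleBelowEye, eye.right) * eye.forward;
        panel.position = eye.position + dir * reachDistance;
        panel.rotation = Quaternion.LookRotation(dir); // face away from the eye
    }
}
```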

However, after I implemented the spherical panel, I realized that the asset pack I was using, Easy Curved UI Menu for VR, could only curve around a cylinder, that is, around one axis. Matching that against the optimal spherical curvature was sub-par, so I built a second panel that curves around only one axis.

cylindrically curved panel

After working with the Easy asset, I realized it would never do what I wanted. I could not get the curved menu to rotate to 32.5 degrees (the midpoint of the optimal angles). The asset essentially takes a flat panel, cuts it up into a bunch of rectangles, and looks at them through a rendering camera, generating a curved plane in the process. Unfortunately, every time the curved menu generates on play, it resets its rotation.

easy UI refusing to rotate to match the frame

I did some research and found a method using UI Extensions. That worked great. I matched up the Bézier curves and moved on.

fitting the UI panel with the model panel

From there I focused on making the panel translucent when no dialogue was present. To do this, I had to drop the background image and keep the 3D model I had made in the environment instead. I also kept getting no errors and no functionality whenever I implemented the GoogleVR reticle system, so I built my own reticle and raycasting scripts based on class examples. By the writing of this post, this is where I am at in the overall system:

The panel is translucent when not in use, but present so it does not jar the user when it appears. The reticle changes color when it hovers over an object marked “NPC,” and clicking on an NPC makes the panel more opaque and brings up the dialogue related to that NPC. When the user looks away from the NPC and clicks, the dialogue disappears, and the panel becomes more translucent again.
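The actual scripts follow the class examples, but the core hover-and-click loop looks roughly like this minimal sketch (the `GazeReticle` name, the 10-meter ray length, and the CanvasGroup fade are my own assumptions):

```csharp
// Minimal sketch of the reticle/raycast behavior described above.
// Assumes NPCs are tagged "NPC" and the panel fades via a CanvasGroup.
using UnityEngine;
using UnityEngine.UI;

public class GazeReticle : MonoBehaviour
{
    public Image reticle;        // reticle sprite at the center of view
    public CanvasGroup panel;    // dialogue panel; alpha = translucency
    public Color idleColor = Color.white;
    public Color hoverColor = Color.cyan;

    void Update()
    {
        Transform eye = Camera.main.transform;
        RaycastHit hit;
        bool overNpc = Physics.Raycast(eye.position, eye.forward, out hit, 10f)
                       && hit.collider.CompareTag("NPC");

        reticle.color = overNpc ? hoverColor : idleColor;

        // The Cardboard trigger registers as a click.
        if (Input.GetMouseButtonDown(0))
            panel.alpha = overNpc ? 1f : 0.3f; // opaque in dialogue, translucent otherwise
    }
}
```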

Dialogue System To-do

  1. script user’s ability to scroll through several panels of dialogue
  2. test with multiple dialogues
  3. test with multiple NPCs
  4. script automatic movement
  5. script movement restriction upon dialogue initiation
  6. script movement freeing upon dialogue ending
  7. bring it into the larger build

12/5

The Play Tests

Shortly after my last post, Yeji had extensive dialogue written, and we all met to finalize the story. Ganjina continued to push through on the environment, finishing both the initial and reflection cities. Using the dialogue system, I created a series of demos.

The first demo was not much more than the system itself, a way of explaining how it worked to users for feedback during our very first play-testing session.

We received feedback to make the reticle more noticeable, to speed up the walk speed, and to decrease or eliminate the delay between clicking and the start of walking. During the play test, we also explored how to drive our story forward, and several ways were proposed. We could use the dialogue itself, restricting or obscuring actions until the player made progress. We could also use the time of day to push the user to move on from the daylight scene.
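The movement feedback in particular reduces to two exposed parameters. Here is a minimal sketch of click-to-walk movement, with the speed and start delay that the testers flagged pulled out as tunables (the `AutoWalk` name and the values are assumptions):

```csharp
// Hypothetical sketch: automatic movement toward a clicked point, with
// tunable walk speed and click-to-walk delay.
using System.Collections;
using UnityEngine;

public class AutoWalk : MonoBehaviour
{
    public float walkSpeed = 3f;     // increased after feedback
    public float startDelay = 0.1f;  // near-instant start after feedback
    private Coroutine current;

    public void WalkTo(Vector3 target)
    {
        if (current != null) StopCoroutine(current);
        current = StartCoroutine(WalkRoutine(target));
    }

    IEnumerator WalkRoutine(Vector3 target)
    {
        yield return new WaitForSeconds(startDelay);
        while (Vector3.Distance(transform.position, target) > 0.05f)
        {
            transform.position = Vector3.MoveTowards(
                transform.position, target, walkSpeed * Time.deltaTime);
            yield return null; // continue next frame
        }
    }
}
```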

Using this feedback, as well as dialogue developed by Yeji, I built a second tutorial which demoed the way we decided to organize the characters and interactions in order to drive the story forward. At this point, Yeji began animating the characters while Ganjina put the finishing touches on the environment.

Using real characters and real dialogue created some design problems. While there were many small ones, one of the larger ones was how to trigger the indicator that an NPC could be interacted with, on both the front and back end. Using the GVR reticle pointer system, I set the reticle distance to 2.5. I then set up a collision detection system with sphere colliders around each NPC. Done, right? No. The player would have to scroll through the dialogue by clicking on the dialogue panel while standing inside the sphere collider. However, because they were still inside the sphere, the raycast would detect a hit on the NPC, not the dialogue panel.

pseudo-cylinder colliders below dialogue panel but tall enough to detect player

I put the sphere colliders on children of the NPC game object and set those children to the layer “Ignore Raycast.” Nope, same problem, most likely because the box collider on the body still had to be clickable to activate the NPC interaction, so I could not set it to “Ignore Raycast.” My final solution was a compound collider made up of four box colliders set at 0, 22.5, 45, and 67.5-degree rotations about the y-axis around each character. This created a pseudo-cylinder that rose high enough to detect the player controller entering it but stayed low enough not to intersect the dialogue panel. This worked fairly well, but I did not figure out that “Ignore Raycast” existed until after the second demo was ready, so the dialogue panel was very buggy in that demo. Similar problems with the scene-switch button meant play testers had to run through multiple times to encounter all the NPCs.
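A minimal sketch of that compound collider setup follows; it assumes the NPC root has no Rigidbody yet (a kinematic one is added so the child triggers report to the root) and that the player controller is tagged “Player”:

```csharp
// Hypothetical sketch of the pseudo-cylinder described above: four trigger
// boxes rotated in 22.5-degree steps around the y-axis of each NPC.
using UnityEngine;

public class PseudoCylinderTrigger : MonoBehaviour
{
    public float diameter = 2.5f; // assumed; matches the reticle distance
    public float height = 1.2f;   // tall enough to catch the player,
                                  // low enough to stay under the panel

    void Awake()
    {
        // A kinematic Rigidbody on the root routes child trigger events here.
        var rb = gameObject.AddComponent<Rigidbody>();
        rb.isKinematic = true;

        foreach (float yRot in new[] { 0f, 22.5f, 45f, 67.5f })
        {
            var child = new GameObject("triggerBox_" + yRot);
            child.transform.SetParent(transform, false);
            child.transform.localRotation = Quaternion.Euler(0f, yRot, 0f);
            child.layer = LayerMask.NameToLayer("Ignore Raycast");

            var box = child.AddComponent<BoxCollider>();
            box.isTrigger = true;
            box.center = new Vector3(0f, height / 2f, 0f);
            box.size = new Vector3(diameter, height, diameter);
        }
    }

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
        {
            // show the "interactable" indicator here (wiring assumed)
        }
    }
}
```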

An additional problem at this time was a lack of order in the night scene: users could just go up to anyone. The GVR reticle was also not accurately indicating when the user could activate a component.

Because of these problems, we received limited feedback. There needed to be an indicator showing when more dialogue remained and when it did not; for this, I added next and stop buttons that appear at the appropriate moments on the panel (see video at end). One play tester also suggested bringing the dialogue box up and to the side rather than down low. However, the positioning of the panel should be based on experience in VR rather than on a computer: while a low panel might be annoying on a computer, looking down at something in your hands or near chest height to read information about something in front of you is a fairly common interaction. It was something I decided to try at the end if we had time, and we did not. One piece of feedback that was implemented after this session was the change from a panel that switches between opaque and translucent to one that switches between active and inactive, minimizing its on-screen time.
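The next/stop logic is simple in the end. A minimal sketch, with the `DialoguePager` name and its fields assumed:

```csharp
// Hypothetical sketch of the next/stop indicator logic described above:
// "next" shows while pages remain, "stop" shows on the last page.
using UnityEngine;
using UnityEngine.UI;

public class DialoguePager : MonoBehaviour
{
    public GameObject panel;      // activated/deactivated per the late feedback
    public GameObject nextButton;
    public GameObject stopButton;
    public Text pageText;

    private string[] pages;
    private int index;

    public void Open(string[] dialoguePages)
    {
        pages = dialoguePages;
        index = 0;
        panel.SetActive(true);
        Refresh();
    }

    public void Next()
    {
        if (index < pages.Length - 1) { index++; Refresh(); }
        else Close();
    }

    public void Close()
    {
        panel.SetActive(false);
    }

    void Refresh()
    {
        pageText.text = pages[index];
        bool last = index == pages.Length - 1;
        nextButton.SetActive(!last);
        stopButton.SetActive(last);
    }
}
```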

Final Development Stages

At this point, Ganjina had finished the environment and started finding sound files and testing ways to switch from one scene to the next. Yeji would orient the characters in the completed environment and determine their interactions. These scenes would then be sent to me, and I would integrate the dialogue system with the animations. This was the most difficult part: on the back end, my limited experience with C# made it difficult to ensure that everything triggered exactly how, and when, we wanted.

I could write a small book on all the little things that went wrong during this. What was probably the driver of many of them was the fact that the two scenes had reflected some things and not others, and one of those things was the user’s agency. In one, the user commands the NPCs; they attend to the user upon the user’s request and guide the user to where they want to go. In the other, the user witnesses the NPCs: they ignore the user and let the system explain where they have to go.

I did the second, darker scene, which we started calling the “night scene,” first. In this one, every mini-scene’s dialogue had to be broken up, with individual pages triggering and being triggered by different animations and sounds. It also took a while to figure out that if I wanted the user to adhere to a certain order without a plague of null references, all I had to do was avoid using else statements. I also became fluent in accessing and assigning variables inside of animators. A good example of this is the “deathTrigger” script, which checks whether another specific NPC’s animator is running its “attack” animation and, if so, sets the boolean “attacked” in the animator of the NPC it is attached to, leading to whatever animations result.
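A minimal reconstruction of that idea, assuming the attack state lives on the animator’s base layer and the state and parameter names match the ones above:

```csharp
// Hypothetical sketch of the "deathTrigger" behavior described above.
using UnityEngine;

public class DeathTrigger : MonoBehaviour
{
    public Animator attackerAnimator; // the other NPC's animator
    private Animator myAnimator;
    private bool triggered;

    void Start()
    {
        myAnimator = GetComponent<Animator>();
    }

    void Update()
    {
        if (triggered) return;

        // Check whether the attacker's base layer is in its "attack" state.
        if (attackerAnimator.GetCurrentAnimatorStateInfo(0).IsName("attack"))
        {
            myAnimator.SetBool("attacked", true); // hand off to the animator
            triggered = true;
        }
    }
}
```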

I also put together the lighting system for this scene, which indicates to the user which characters to approach and interact with. Some of the lighting cues are triggered by dialogue; others are triggered by animations ending, as needed.

all the spotlights turned on
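In code, each cue reduces to enabling a Light at the right moment. A minimal sketch, with the wiring (dialogue-manager calls or animation events) assumed:

```csharp
// Hypothetical sketch: an NPC's guide spotlight, toggled by whatever cue
// applies (a dialogue page ending, an animation finishing, etc.).
using UnityEngine;

public class SpotlightCue : MonoBehaviour
{
    public Light spotlight;

    void Awake()
    {
        spotlight.enabled = false; // all lights start off
    }

    // Called by the dialogue manager or an animation event (assumed wiring).
    public void Activate()   { spotlight.enabled = true; }
    public void Deactivate() { spotlight.enabled = false; }
}
```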

After that, I did the first, brighter scene, which we started calling the “day scene.” Having characters face the user was easy. Having NPCs walk along a certain path continuously, face the user when clicked on, and then return to walking that path when the interaction ended was a bit harder. I figured it out, though, writing some pieces of script in the dialogue manager that stored the rotation of the NPC on Start() and then returned the NPC to it when the user clicked through the final page of the dialogue for that interaction, or when the user left that NPC’s detection colliders. Animation starts and stops were also made to correspond. I created separate idle states with trigger transitions that would be fired by the dialogue manager, or another script if necessary, when the user clicked on the NPC. Hence, when a user clicks, the NPC stops moving, looks at the player, and plays an idle animation. When the user clicks all the way through the dialogue or walks away, the NPC returns to its original orientation and starts walking, or simply continues to stand or sit.

animator for pacing, user clicking triggered transition to wave
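A minimal sketch of that store-and-restore pattern (the `NpcFacing` name and the “idle”/“walk” animator triggers are assumptions):

```csharp
// Hypothetical sketch of the pause/face/resume behavior described above.
using UnityEngine;

public class NpcFacing : MonoBehaviour
{
    private Quaternion homeRotation;
    private Animator animator;

    void Start()
    {
        homeRotation = transform.rotation; // remembered for the restore
        animator = GetComponent<Animator>();
    }

    // Called when the user clicks on this NPC.
    public void FaceUser(Transform user)
    {
        Vector3 toUser = user.position - transform.position;
        toUser.y = 0f; // rotate around y only
        transform.rotation = Quaternion.LookRotation(toUser);
        animator.SetTrigger("idle"); // assumed transition into the idle state
    }

    // Called on the final dialogue page, or when the user leaves the colliders.
    public void Resume()
    {
        transform.rotation = homeRotation;
        animator.SetTrigger("walk"); // assumed transition back to the path
    }
}
```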

Once I had these scenes done, I would send them to Yeji, who added the final sound pieces, and then we worked on developing an aesthetic trigger for switching scenes that functioned the way we wanted. The trigger only appears in the first scene after the user speaks to the king; dialogue guides them to the king, though they can choose to speak to him first if they wish. The trigger only appears in the second scene after the sixth mini-scene has completed, ensuring that the user stays in the night scene until all six mini-scenes have been witnessed.

scene transition orb
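The orb’s gating logic is the same in both scenes; only the condition differs. A minimal sketch, with the scene name and the `Reveal` wiring assumed:

```csharp
// Hypothetical sketch of the scene-transition orb described above. Another
// script (the king's dialogue, or the mini-scene counter) calls Reveal().
using UnityEngine;
using UnityEngine.SceneManagement;

public class TransitionOrb : MonoBehaviour
{
    public string nextScene = "NightScene"; // assumed scene name

    void Awake()
    {
        gameObject.SetActive(false); // hidden until the condition is met
    }

    public void Reveal()
    {
        gameObject.SetActive(true);
    }

    // Called when the reticle click hits the orb.
    public void OnClicked()
    {
        SceneManager.LoadScene(nextScene);
    }
}
```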

Generally, I learned many lessons about C# and about the power of scripting in Unity. It is frustrating that the end product is not perfect: NPCs do not always turn properly when clicked on; sometimes the dialogue panel refuses to appear; and when we demoed it for the class, we found a hole in the colliders that we had never encountered before. However, I think the experience can go one of two ways. The user can skip through the dialogue as fast as possible, hit the big shiny orbs, and then fumble for the escape key or alt-F4 to quit. Or the user can slow down, pay attention to the characters, read the dialogue, and come to understand an interesting story about what is reflected in the mirrors of the people we encounter every day, and what we all pretend to forget. Given the time zone differences within our group and our relatively amateur skill sets, I think we did fairly well. So, without further ado, here’s a run-through.
