Final Project Documentation | Reflections

Description

The user finds themselves inside a small, rectangular town. Pink buildings encircle a clear blue lake, with no visible exit from the road that follows its edge. The reflection of the town can be seen in the lake. Nearer to the player, a bridge stretches across the lake, and on the opposite end, a pier extends into it. People are scattered about the town, walking around, sitting, or standing, going about their lives. Approaching and speaking with the residents reveals the simple society and economy of the town. Its everyday people are generally distrustful of difference and find their purpose in diving and digging for stones to give to the rulers in exchange for property rights. After the player talks with the king about stones, a mysterious light appears at the end of the pier.

Upon interacting with the light, most of the light disappears. A subtle mirroring of building positions suggests this is not the same place, though the same people are scattered about. Faint lights illuminate the road, and a bright spotlight falls on the most cheerful peasant from the first town. Approaching and interacting with the residents, the user finds that the residents no longer recognize the player. Instead, as the user moves from scene to scene, the underlying emotional and political complexities of the town unveil themselves. An overly cheerful peasant lives with depression. A busy wife cheats on her yearning husband. Virtuous rulers exploit the labor of the people for valuables without proper compensation or explanation of their worth. An angry witch sees all but does nothing, for the wizard keeps everyone under illusion. A charismatic and responsible king submits to the dutiful queen’s insistence on greed despite his guilty conscience, as well as the waning powers and reasoned counsel of his trusted wizard. A loyal servant to the queen, driven mad by her suspicions of the rulers, kills a merchant who refuses to explain how they arrived in the town.

Process

This project was created in collaboration with Yeji and Ganjina.

At the beginning of the process, we had each voted for different topics, so we brainstormed compelling narratives and environments for each topic. We narrowed that list down to two. The first fell into the apocalypse theme: the user would find themselves in an apocalyptic environment, and when they interacted with the environment, they would be sent into a memory, during which they could choose to alter the timeline and repair the piece of the environment with which they had interacted. The second idea involved expanding upon the city of Valdrada from Invisible Cities. According to Calvino, this city was “built. . . on the shores of a lake, with houses all verandas one above the other,” such that “the traveler . . . sees two cities: one erect above the lake, and the other reflected, upside down” (Calvino 45). Calvino goes on to describe it as “twin cities” that “are not equal,” for “every face and gesture is answered, from the mirror, by a face and gesture inverted, point by point” (45-46). In this way, they “live for each other . . . but there is no love between them” (46). Feedback from the class was inconclusive, so we expanded both ideas a bit more, searching for the compelling stories, interactions, and environments each might support.

At our next meeting, we felt more compelled by the ideas we had created from Valdrada and from reflection as a guiding concept. Two ideas prevailed, one out of my creative process and one out of Yeji’s.

My idea placed the user in a tropical-island environment. Guided by the question, “What would it be like to be a child in a city where everyone knows their reflections are a different, mirror world?” the narrative followed the relationship between two people throughout their lives. In one world, they would become best friends. In the reflection, they would become mortal enemies. Moreover, in each scene, the two worlds would interact through the reflections, and the adults would anticipate these interactions and respond to them nonchalantly. For instance, if one of the main characters tackled the other through a wall in the enemy world, then that hole in the wall would exist in both worlds. In both worlds, the parents would fix the wall without much comment.

Yeji’s idea emerged from the environment of Amsterdam and revolved around the question, “What would be reflected?” This idea focused more on a commentary on how people tend to hide their true thoughts and feelings in order to get along, yet suppressing these feelings can lead to outbursts when one feels safe or reckless enough to let them out. In this narrative, the story would emerge from the user interacting with the characters to understand some big event that connected everyone in the city. The user would start in a bright city where people glossed over this big event to go about their daily lives and would transition to the second city through an environmental interaction. In the second city, the environment would be darker to match the darker responses to the big event, responses which disrupted the functioning of the city. The user would also encounter these responses as a voyeur rather than a participant, indicating their private, and perhaps truer, nature.

Yeji’s idea – initial presentation

The clarity of the interaction and story, as well as the feasibility given the remote working environment, ultimately pushed us to develop Yeji’s idea. In my idea, the primary interaction would be the user’s choice of how to experience the story, and the story would be delivered via scenes which occurred around the user as a ghost. Given the need to deliver two world narratives in parallel, our best solution was an environmentally embedded panel present at the start of each scene, where the user would choose good or evil. They would then be brought into the scene to watch it. In Yeji’s idea, the primary interactions would be with the characters and with a single transition into the reflection. The user’s agency and participation would also be manipulated to convey story rather than held constant to make room for it. Furthermore, my idea would rely upon complex and compelling animations across many different scenes, while Yeji’s would still be compelling with simpler animations and fewer scenes. Hence, across the board, Yeji’s idea seemed more conceptually appealing, more explorative of the capabilities of VR, and more feasible to execute.

One problem still remained: how would we voice the characters? Voiceovers did not make sense without proper voice actors because of the number of characters we were using. After a bit of deliberation, we decided to go with a text-based dialogue system, similar to those of 2D text-based adventure games, like the early Pokémon games.

Pokémon screenshot

With this general outline for the experience, we sketched out the environment as the user would experience it for the paper prototyping session.

Feedback from this session solidified an already-developing idea to use the dialogue and interactions with characters to guide the user to the scene-transition interaction. Another option was to start the user right by the pier, but our desire to have the story emerge from the characters suggested a focus on encouraging exploration rather than completion in the first scene. This session also pushed us to move away from having the private scenes occur inside of the houses. We realized that the narrative would still be clear, and we could instead constrain the user’s agency in order to achieve the feeling of privacy. With a relatively clear idea of the world and an outline for a story, we divided up the creative work. Yeji would focus on the story, figuring out the overarching big event and scripting dialogue. Ganjina would start to build the environment based upon our sketches. I would start figuring out how to achieve a functioning dialogue system in virtual reality.

An extensive outline of the design process for the dialogue system is available in my development journal, but I will summarize and expand upon some points here.

To start off, we wanted the system to feel comfortable in VR. The system would also be the object which would carry our story to the user, so it had to exist at the intersection of user and environment. As a result, we decided to deliver the system via a HUD panel rather than multiple panels attached to each NPC. Mike Alger’s recommendations on comfortable relative locations for workspace UI determined the dimensions of the panel, its distance from the main camera, its relative angle, and its curvature. Limitations of the assets we were working with meant it could not curve around the optimal spherical shape, only around a cylindrical one.
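As a rough illustration of this placement logic, the sketch below positions a panel at a fixed distance from the camera, pitched downward toward Alger’s comfortable content zone. The component name and the specific distance and angle values are illustrative stand-ins, not our final numbers.

```csharp
using UnityEngine;

// Hypothetical sketch: keep a HUD panel at a set distance in front of the
// camera, angled below eye level. Values here are placeholders.
public class HudPanelPlacer : MonoBehaviour
{
    public Transform mainCamera;        // the user's head
    public float distance = 1.0f;       // meters in front of the camera
    public float pitchDegrees = 32.5f;  // downward angle below eye level

    void LateUpdate()
    {
        // Flatten the gaze direction, pitch it down, then offset the panel.
        Vector3 forwardFlat = Vector3.ProjectOnPlane(mainCamera.forward, Vector3.up).normalized;
        Quaternion pitch = Quaternion.AngleAxis(pitchDegrees, mainCamera.right);
        transform.position = mainCamera.position + pitch * forwardFlat * distance;
        // Face the panel back toward the camera so the text stays readable.
        transform.rotation = Quaternion.LookRotation(transform.position - mainCamera.position);
    }
}
```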

The remainder of the visual design choices occurred on an as-needed basis as more parts of the project were integrated and tested with play testers in a series of demos that can be found in my development journal.

Visual design choices:

  • The dialogue itself is curved along the same curve as the panel’s edges.
  • A next button and a stop button indicate whether there is more text to scroll through or the dialogue has reached its end.
  • The text is black in bright scenes and white in dark ones to optimize legibility.
  • The user’s reticle is a bright blue that does not blend in with the colors in any of the scenes. It expands into a polygon instead of a circle to match the low-poly aesthetic.
  • The reticle expands when an interaction is possible because this was clearer than a simple color change that might make the reticle disappear into the background.
  • The translucent blue of the panel provides enough contrast without disrupting the visual environment.
  • The panel disappears when deactivated. Initially, it changed from translucent to opaque, but feedback from play testers suggested this was too disruptive to the visual experience.

In addition to the visual design of the panel, we also had to develop functions which did not detract from the experience and were doable within the restrictions of the Google Cardboard. Hence, everything requires a single click or a click-and-hold. Clicks under specific conditions create specific responses from the environment, and these conditions were set to be as intuitive as possible. Finally, these conditions were determined and coded in a master script called dialogueManager. The script is around 500 to 600 lines, so posting it here would not be conducive to understanding it. Many of the functional design choices emerged from a gradual process of integrating the dialogueManager script with the pacing of the dialogue and the animations of each scene.

Functional design choices:

  • When the dialogue panel is active, it must be deactivated before another dialogue can be opened. This is reinforced by changing nothing if the user clicks on another NPC when the dialogue for one NPC is active.
  • The dialogue panel can be reset and deactivated by clicking on anything which is not the panel or another NPC. Hence, the user can close out any dialogue at any point in the dialogue. This adds to their agency as an explorer of the narrative.
  • Clicking on the panel scrolls through the dialogue and ends the dialogue when there is no more of it. The brevity of our dialogue meant that scrolling backwards through the dialogue was unnecessary. Scrolling backwards would not have made sense in the night scene either, when the user is witnessing rather than conversing.
  • In the second scene, the user cannot click through the dialogue until certain animations have played. This occurs during the cheating scene, with its delayed arrival of the peasant husband. The reticle still indicates a potential interaction, however, to indicate that the user will be able to move forward in the dialogue.
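The core click rules above can be sketched in a much-condensed form. This is an illustrative reduction of the real dialogueManager, which is far longer; the tag names and fields here are assumptions for the sketch.

```csharp
using UnityEngine;

// Hypothetical, trimmed-down sketch of the click rules: open on NPC click,
// scroll on panel click, close on anything else, ignore other NPCs.
public class DialogueClickRules : MonoBehaviour
{
    string activeNpc = null;   // NPC whose dialogue is open, if any
    int page = 0;
    public string[] pages;     // dialogue pages for this demo

    // Called with whatever the reticle raycast hit on a click.
    public void OnClick(GameObject hit)
    {
        bool hitPanel = hit != null && hit.CompareTag("DialoguePanel");
        bool hitNpc = hit != null && hit.CompareTag("NPC");

        if (activeNpc == null && hitNpc)
        {
            activeNpc = hit.name;      // open this NPC's dialogue
            page = 0;
        }
        else if (activeNpc != null && hitPanel)
        {
            page++;                    // scroll; close when out of pages
            if (page >= pages.Length) Close();
        }
        else if (activeNpc != null && !hitNpc)
        {
            Close();                   // clicking elsewhere resets the panel
        }
        // Clicking another NPC while a dialogue is active changes nothing.
    }

    void Close() { activeNpc = null; page = 0; }
}
```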

Before the environment and dialogue were completely fleshed out, I also worked on the mechanics of the interaction between the user and the NPCs, since these determined much of the dialogue system’s functionality. We did not want any interactions triggered from an unintuitive distance, so I had to develop in-range detection systems for the NPCs. For reasons described in my development journal, I used a pseudo-cylindrical, box collider, trigger method to detect when the player was close enough to interact and set the GoogleVR reticle to detect objects from a similar distance.

box colliders to fake a cylinder collider underneath the panel

During this time, I also figured out a simple script to have the NPC look at the player upon interaction. This script became more complicated later, when we had to have two NPCs stop their pacing, turn toward the player, and then return to their pacing routine when the dialogue interaction was completed.

npcLookAtPlayer
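A minimal version of the look-at behavior might look like the sketch below: face the player while a dialogue is active, then return to the original facing when it ends. The field names and turn speed are illustrative, not taken from the actual npcLookAtPlayer script.

```csharp
using UnityEngine;

// Illustrative sketch of the look-at behavior described above.
public class NpcLookAtPlayer : MonoBehaviour
{
    public Transform player;
    public float turnSpeed = 120f;   // degrees per second
    public bool dialogueActive;      // set by the dialogue system

    Quaternion restRotation;

    void Start() { restRotation = transform.rotation; }

    void Update()
    {
        Quaternion target;
        if (dialogueActive)
        {
            // Face the player, rotating only around the vertical axis.
            Vector3 toPlayer = player.position - transform.position;
            toPlayer.y = 0f;
            target = Quaternion.LookRotation(toPlayer);
        }
        else
        {
            target = restRotation;   // return to the original facing
        }
        transform.rotation = Quaternion.RotateTowards(
            transform.rotation, target, turnSpeed * Time.deltaTime);
    }
}
```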

Finally, I also figured out the auto-walking mechanics so that the user could move by clicking and holding.

autoWalk
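The auto-walk idea reduces to moving the player rig along the flattened gaze direction while the button is held. This sketch is an assumption-laden stand-in for the real autoWalk script; in particular, the mouse-button check stands in for the Cardboard trigger input.

```csharp
using UnityEngine;

// Illustrative sketch of click-and-hold auto-walking.
public class AutoWalk : MonoBehaviour
{
    public Transform head;          // main camera
    public float speed = 2f;        // meters per second
    public bool movementLocked;     // e.g. while a dialogue is open

    void Update()
    {
        // Mouse button 0 stands in for the Cardboard trigger here.
        if (movementLocked || !Input.GetMouseButton(0)) return;

        Vector3 dir = head.forward;
        dir.y = 0f;                 // stay on the ground plane
        transform.position += dir.normalized * speed * Time.deltaTime;
    }
}
```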

As the dialogue and then the environment were completed, our focus shifted to animating the characters in the space. We used a back-and-forth workflow, and during my part I integrated the dialogue system into the animations.

Dialogue System & Animation Design:

  • Day Scene:
    • NPCs will turn toward the user when the user interacts with them. This acknowledges the user as a participant and gives them control over the start of the interaction, a feature of public encounters between people.
    • NPCs return to their original positions or routines when the interaction ends. This maintains a sense of immersion, or that the user is in a place which exists independently of them.
    • The user can continue to interact with NPCs until they desire to move on to the next scene. This encourages exploration and understanding of the different characters and reinforces the user’s role.
  • Night Scene:
    • NPCs do not turn toward the player when the user interacts. This implies that the user is a voyeur, and that the NPCs are not necessarily aware of their presence.
    • Each dialogue-animation set is relative to the scene playing out rather than the NPC activated. This further reinforces the idea that the user is a witness of an otherwise private interaction.
    • The user cannot interact with NPCs whenever they want, but must do so in the prescribed, indicated order. This ensures that the story unfolds progressively, such that the pieces build on each other and organize the bits provided in the day scene.
    • With the exception of the arrival of the peasant husband allowing progression in the dialogue, the dialogue activates the animations of the characters. This was accomplished through edits to dialogueManager.
    • In the cases of the death animations, animation is both a condition and an effect. The deathTrigger script checks if the attacker of the NPC to which it is attached is attacking and then plays the death animation of the attached NPC.
deathTrigger
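The deathTrigger logic can be sketched as below: poll the attacker’s animator state, and when the attack state is playing, flip a boolean on this NPC’s animator. The state and parameter names follow the description above, but the exact structure is an illustrative reconstruction.

```csharp
using UnityEngine;

// Sketch of the deathTrigger idea: when the attacker's "attack" animation is
// playing, set "attacked" on this NPC's animator to start its death animation.
public class DeathTrigger : MonoBehaviour
{
    public Animator attackerAnimator;  // the attacking NPC
    Animator myAnimator;
    bool triggered;

    void Start() { myAnimator = GetComponent<Animator>(); }

    void Update()
    {
        if (triggered) return;
        // Is the attacker currently in its "attack" state on layer 0?
        if (attackerAnimator.GetCurrentAnimatorStateInfo(0).IsName("attack"))
        {
            myAnimator.SetBool("attacked", true);
            triggered = true;  // only die once
        }
    }
}
```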

I also figured out how to do a root-motion pacing animation cycle that could be interrupted. This allowed us to add some motion into the otherwise static environment of the day scene, giving more of a feeling of a lively town and further distributing the interactions around the space. The pacing script attaches to an object with two box collider triggers placed at the limits of the NPC’s pacing area. When an NPC enters either trigger, the script changes its animation.

pacingLimits
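Stripped to its core, the pacing-limit trick is just a trigger callback that turns the NPC around; the interruption logic (stop, face the player, resume) layers on top. This sketch assumes the NPCs carry an "NPC" tag, which is an illustrative detail.

```csharp
using UnityEngine;

// Sketch of the pacing-limit idea: this script sits on an object holding two
// box collider triggers at the ends of the pacing path.
public class PacingLimits : MonoBehaviour
{
    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("NPC")) return;

        // Turning 180 degrees sends the root-motion walk cycle back along
        // the path toward the opposite trigger.
        other.transform.Rotate(0f, 180f, 0f);
    }
}
```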

With the animations worked out, the last components were the sound design, the scene change interaction, and the lighting for the second scene.

Since the order of scene encounters matters in the night scene, there needed to be an indication of which scene could be interacted with beyond the reticle changing. Because the dark environment of the night scene helped establish the mood of a darker narrative, we decided to use theater-esque lighting to indicate which characters the user should go and interact with. The sceneLightsManager script checks the animation state, the dialogue state, or both for a given scene, on an as-needed basis. Hence, as the user completes each scene, they are visually guided to the next one they can interact with.

sceneLightsManager
all spot lights on in night scene
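The gating behavior of sceneLightsManager might be sketched like this, with the per-vignette completion flags standing in for our real dialogue and animation checks; the field names are assumptions for the sketch.

```csharp
using UnityEngine;

// Sketch of the sceneLightsManager logic: light a vignette's spotlight only
// when every earlier vignette is finished and this one is not yet done.
public class SceneLightsManager : MonoBehaviour
{
    public Light[] spotlights;           // one per vignette, in story order
    public bool[] vignetteFinished;      // set by dialogue/animation events

    void Update()
    {
        bool unlocked = true;
        for (int i = 0; i < spotlights.Length; i++)
        {
            // A vignette is lit while it is the earliest unfinished one.
            spotlights[i].enabled = unlocked && !vignetteFinished[i];
            unlocked &= vignetteFinished[i];
        }
    }
}
```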

The visual cue for the scene change had to be coherent with the visual cues we used elsewhere. Hence, we used an orb of light as a button. This was visually coherent with our use of light in the second scene and our use of single-click interactions. A similar distance-sensing method to the NPCs was used to ensure the user could only interact once they were within a relatively intuitive distance. Finally, the scene change only appears in certain conditions.

In the day scene, it appears once the user has spoken with the king, and the king has encouraged the user to go acquire stones. When combined with the dialogue of the rogue and the swimming man in the water, it seemed coherent with the story that the king’s dialogue would push the user to try and find stones, and that going to the edge of the pier might allow them to do so.

day scene orb

In the night scene, the orb only appears once the user finishes the last scene. This fits with the intention to require the user to witness all of the truth of the town before being able to return to its daytime. Using the same orb in the same position suggests that it will do the same thing it did before, teleport the user to a new scene.

night scene orb

As for the sound design, it was done mostly by Ganjina and Yeji. I helped integrate the sounds into the animations of the night scene using methods similar to those I used to order the specific pieces of dialogue and animation.

Finally, we had to think about the ending. We were choosing between a fade-to-black and a return to the day scene. In the case of the return to the day scene, we would remove the murdered characters. We were also choosing between maintaining the interactions with the characters or eliminating them by disabling the dialogue system completely. Returning to the day scene without allowing interaction seemed to be the strongest indicator that the user’s role had fundamentally changed as a result of what they had witnessed. They now occupied the voice in all of the characters’ heads which caused the suspicion and conflict, so it made more sense that the town would no longer want to speak to the user.

Reflection

Overall, this piece was successful in delivering the story we wanted. It conveys the commentary on the political and emotional undercurrents of our public interactions. It embeds the story within the characters distributed throughout the environment. The affordances of the interactions with the dialogue system and the scene-change are clear, and their conditions support the delivery and pacing of the story. Furthermore, the experience takes advantage of the unique ability of VR to manipulate the agency and participation of the user, and this manipulation helps convey the meaning of the story.

However, the experience is not perfect. Sometimes the NPCs do not turn toward the user. Sometimes the dialogue panel is triggered by pointing at the pseudo-cylindrical colliders on the ground. There is some asymmetrical pacing in the delivery of the cheating scene due to the dependencies between the animations and dialogue pieces. However, these bugs have been reduced as much as possible and are mitigated by the logic of the system and the story. As a result, they do not detract significantly from the overall delivery of the piece.

Final Project Dev Journal | Reflection

13/4

After a bit of brainstorming, Ganjina, Yeji, and I narrowed our ideas down to two: one idea that intersected apocalypse and Invisible Cities, and the other based upon an invisible city. After receiving feedback on our ideas, we decided upon the latter, an experience expanding upon the city of “Valdrada” (Calvino 45-46). Practical limitations of the interactions in Google Cardboard made the former idea much less compelling.

According to Calvino, this city was “built. . . on the shores of a lake, with houses all verandas one above the other,” such that “the traveler . . . sees two cities: one erect above the lake, and the other reflected, upside down” (Calvino 45). Calvino goes on to describe it as “twin cities” that “are not equal,” for “every face and gesture is answered, from the mirror, by a face and gesture inverted, point by point” (45-46). In this way, they “live for each other . . . but there is no love between them” (46).

From these descriptions, we began to think about different stories that would emerge from, and environments which would embody, such a city. For the environment, we are currently leaning toward a tropical island style, though this may change as we work out the story more thoroughly. Furthermore, we may adopt a similar visual style to Kentucky Route Zero. The next step is to flesh out the story in its entirety.

So far, we’ve established that there are two cities which have been affected by a large-scale traumatic event. In the non-reflection city, the characters respond to the stress of such an event in ways which are ultimately positive, whereas in the world below, their responses reveal the darker aspects of humanity. In some way, these two worlds will interact visually, and perhaps causally.

As for the specific stories to which users will bear witness, I’ve so far thought of watching the relationship of two people. In the non-reflection city, users watch key moments of their lives which bring them to be the closest of friends, and in the reflection city, users watch them become enemies who eventually kill each other. The user might be able to influence the story by choosing which world is the physical one and which is the reflection.

28/4

Much has changed in the last two weeks. To start, our environment and story have shifted significantly. Shortly after finishing the last post, Yeji suggested a compelling environment inspired by Amsterdam, coupled with a story that revolves around the user exploring the meaning of reflection. We decided to develop and begin building that idea.

Pre-development Story Description

Yeji’s illustrations of the idea

There will be two cities, one we’ve been calling the “initial” city and the other the “reflection” city. The user starts in the initial city, where the houses are colorful and the sky is beautiful as the street lights start to come on and light up the city. People are walking around, and the atmosphere is cheerful. A pier stretches out onto the lake. The user can walk around the lake and talk with the NPCs wandering about their daily lives. Over time, a light will appear in the water at the end of the pier, drawing the user to it. When the user reaches the light, they find they can click on it, and suddenly the world shifts.

The user finds themselves in a city that is the reflection of the initial city. The building positions are flipped. The sky is dark. The houses are grey. No one is outside. A few houses have color and a visual cue similar to the water’s, suggesting the user may interact with them. As the user approaches these homes, they can peer into the window to see an interaction reflecting a major event which has negatively affected the residents, something which those in the initial city spend their lives ignoring.

Development So-Far

Paper Prototyping

After establishing a general outline of the experience, we sketched paper prototypes for the play-testing session.

We received several major feedback points:

  • How will we guide the user to the pier? Dimming the sky to increase visual salience of the cue may help. Using NPC interactions and dialogue would be even better. Starting the user nearer to the pier might also help.
  • How are we moving? We could do automatic moving, like in TrailVR, or we could do GoogleMaps style point-and-teleport.
  • How much animation are we doing? Trying lots of animation means higher potential for making good use of VR storytelling space. It also means more overhead and more work.

Once we received feedback on our paper prototypes, we decided to pull assets before dividing up the work of building and finalizing the story.

Story

We shared a few ideas at the end of a meeting, and then Yeji took the lead on the story until we could meet again to finalize it. As of now, it is still being fleshed out. The “big event” idea mentioned earlier has been grounded in a fear of the unknown outside world. The city, as depicted, will be entirely enclosed around the lake. The reason for this peculiar design will likely guide the tension of the story. The story is also emerging from an asset pack of fantasy characters that we picked up.

Environment

Ganjina started building the environment based on the drawings, and a few things have changed as a result of the assets we are using. First, the “lake” is square. Second, the pier is a bridge, though this may change. Otherwise, the first draft of the initial city is done.

Dialogue System

For the dialogue, we considered doing voiceovers but ultimately thought that a text-based system would be more interesting to try and much more scalable. I was given the task of starting to build the dialogue system.

The first step was to build a panel for all of the UI to go on. I knew that I wanted to build a panel based on the optimal ergonomic VR workspace areas defined by Mike Alger in 2015. For that, I would need a structural framework which could take real-world inputs. Unfortunately, my skills with Unity are not such that I could satisfy my own need for accuracy. Luckily, I’ve been working in Fusion 360 a ton for Machine Lab, and I was able to build my own panel.

There are two versions. The first is a spherically-curved panel. I estimated 2/3 arm reach distance as 2/3 of the FOV of Google Cardboard, roughly 60 degrees. I then took the 15 to 50 degrees below eye level, and intersected the sketches with a body.

spherical, curve panel

However, after I implemented this, I realized that the asset pack I was using, Easy Curved UI Menu for VR, could only curve around a cylinder, that is, around one axis. Matching that up against the optimal spherical curvature was sub-par, so I built another one that only curved around one axis.

cylindrical, curve panel

After working with the Easy asset, I realized it would never do what I wanted. I could not get the curved menu to rotate at 32.5 degrees (the mid-curve of the optimal angles). The asset pack essentially takes a flat panel, cuts it up into a bunch of rectangles and looks at them through a rendering camera, generating a curved plane in the process. Unfortunately, every time the curved menu generates on play, it resets its rotation.

easy UI refusing to rotate to match the frame

I did some research and found a method with UI Extensions. That worked great. I matched up the bezier curves, and moved on.

fitting the UI panel with the model panel

From there I focused on making the panel translucent when no dialogue was present. To do this, I had to drop the background image and keep the 3D model I had made in the environment instead. The GoogleVR reticle system also gave me no errors and no functionality whenever I implemented it, so I built a reticle and raycasting scripts based on class examples. As of this post, this is where I am in the overall system:

The panel is translucent when not in use, but present so it does not jar the user when it appears. The reticle changes colors when it hovers over an object marked “NPC,” and clicking on an NPC makes the panel more opaque and brings up the dialogue related to that NPC. When the user looks away from the NPC and clicks, the dialogue disappears, and the panel becomes more translucent again.

Dialogue System To-do

  1. script user’s ability to scroll through several panels of dialogue
  2. test with multiple dialogues
  3. test with multiple NPCs
  4. script automatic movement
  5. script movement restriction upon dialogue initiation
  6. script movement freeing upon dialogue ending
  7. bring it into the larger build

12/5

The Play Tests

Shortly after my last post, Yeji had extensive dialogue written, and we all met to finalize the story. Ganjina continued to push through on the environment, finishing up both the initial and reflection cities. With the dialogue system I created a series of demos.

The first demo was not much more than the system itself, a way of explaining how it worked to the user for feedback during our very first play testing session.

We received feedback to change the reticle so it was more noticeable, to speed up the walk speed, and to decrease or eliminate the delay between clicking and the start of walking. During the play test, we also explored how to drive our story forward, and several ways were proposed. We could use the dialogue by restricting or obscuring actions until the player made progress. We could also use the time of day to push the user to move on from the daylight scene.

Using this feedback, as well as dialogue developed by Yeji, I built a second tutorial which demoed the way we decided to organize the characters and interactions in order to drive the story forward. At this point, Yeji jumped into animating the characters while Ganjina put the finishing touches on the environment.

Using real characters and real dialogue created some design problems. While there were many small ones, one of the larger ones was how to trigger the indicator that the NPC could be interacted with on the front and back end. Using the GVR reticle pointer system, I set the distance to 2.5. I then set up a collision detection system with spheres around each NPC. Done, right? No. The player would have to scroll through the dialogue when inside the sphere collider by clicking on the dialogue panel. However, because they were still inside the sphere, the raycast would detect a hit on the NPC, not the dialogue panel.

pseudo-cylinder colliders below dialogue panel but tall enough to detect player

I put the sphere colliders on children of the NPC game object and set those children to the layer “Ignore Raycast.” Nope, same problem, most likely because the box collider on the body still had to be clickable to activate the NPC interaction, so I could not set it to “Ignore Raycast.” My final solution was a compound collider made up of four box colliders set at 0, 22.5, 45, and 67.5 degree rotations around each character’s y-axis. This created a pseudo-cylinder that rose high enough to detect the player controller entering it but low enough that it would not encounter the dialogue panel. This worked fairly well, but I did not figure out that “Ignore Raycast” existed until after the second demo was ready, so the dialogue panel was very buggy in this demo. Similar problems with the scene-switch button meant play testers had to run through multiple times to encounter all the NPCs.
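The compound-collider setup can be generated in code rather than by hand, roughly as below. The radius and height values and the child naming are illustrative; only the four y-rotations come from the actual solution.

```csharp
using UnityEngine;

// Sketch of the compound-collider workaround: four rotated box collider
// triggers approximate a cylinder for proximity detection, kept short enough
// that they never sit in front of the dialogue panel.
public class PseudoCylinderCollider : MonoBehaviour
{
    public float radius = 2.5f;  // matches the reticle's interaction distance
    public float height = 1.0f;  // low enough to stay under the panel

    void Start()
    {
        float[] yAngles = { 0f, 22.5f, 45f, 67.5f };
        foreach (float angle in yAngles)
        {
            var child = new GameObject("rangeTrigger");
            child.transform.SetParent(transform, false);
            child.transform.localRotation = Quaternion.Euler(0f, angle, 0f);
            // Keep the triggers invisible to the reticle's raycast.
            child.layer = LayerMask.NameToLayer("Ignore Raycast");

            var box = child.AddComponent<BoxCollider>();
            box.isTrigger = true;
            box.size = new Vector3(radius * 2f, height, radius * 2f);
        }
    }
}
```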

An additional problem at this time was a lack of order in the night scene; users could just go up to anyone. The GVR reticle was also not accurately indicating when the user could activate a component.

Because of these problems, we received limited feedback. There needed to be an indicator of whether or not there was more dialogue. For this, I added next and stop buttons that appear at the appropriate moments on the panel (see video at end). One play tester also suggested bringing the dialogue box up and to the side rather than down low. However, the positioning of the panel should be based on the experience in VR rather than on a computer. While a low panel might be annoying on a computer, looking down at something in your hands or near chest height to read information about something in front of you is a fairly common interaction. It was something I decided to try at the end if we had time, and we did not. One piece of feedback that was implemented after this session was the change from a panel that switches between opaque and translucent to one that switches between active and inactive, minimizing its on-screen time.

Final Development Stages

At this point, Ganjina had finished the environment and started finding sound files and testing ways to switch from one scene to the next. Yeji would orient the characters in the completed environment and determine their interactions. These scenes would be sent to me, and I would integrate the dialogue system into the animations. This was the most difficult part. On the back end, my limited experience with C# made ensuring that everything would trigger exactly how we wanted, when we wanted, difficult.

I could write a small book on all the little things that went wrong during this. What was probably the driver of many of them was the fact that the two scenes had reflected some things and not others, and one of those things was the user’s agency. In one, the user commands the NPCs; they attend to the user upon the user’s request and guide the user to where they want to go. In the other, the user witnesses the NPCs: they ignore the user and let the system explain where they have to go.

I did the second, darker scene, which we started calling the “night scene,” first. In this one, each mini-scene’s dialogue had to be broken up, with individual pages triggering and being triggered by different animations and sounds. It also took a while to figure out that if I wanted the user to adhere to a certain order without a plague of null references, all I had to do was avoid using else statements. I also became fluent in accessing and assigning variables inside of animators. A good example of this is the “deathTrigger” script, which checks whether another specific NPC’s animator is running its “attack” animation and, if so, sets the boolean “attacked” in the animator of the NPC it is attached to, leading to whatever animations result.
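A minimal sketch of the “deathTrigger” idea looks something like the following. The component, field, and parameter names here are illustrative assumptions, not the exact ones from our project:

```csharp
using UnityEngine;

// Hypothetical sketch: watch another NPC's animator for its "attack"
// state and, when it plays, flip a boolean on this NPC's own animator.
public class DeathTrigger : MonoBehaviour
{
    public Animator attackerAnimator; // the NPC whose attack we watch for
    private Animator victimAnimator;  // the NPC this script is attached to

    void Start()
    {
        victimAnimator = GetComponent<Animator>();
    }

    void Update()
    {
        // Check whether the attacker's current state on layer 0 is "attack".
        AnimatorStateInfo state = attackerAnimator.GetCurrentAnimatorStateInfo(0);
        if (state.IsName("attack"))
        {
            // Set the victim's boolean so its own transitions can play out.
            victimAnimator.SetBool("attacked", true);
        }
    }
}
```

The key pieces are `GetCurrentAnimatorStateInfo` for reading another animator’s state and `SetBool` for driving a transition from script rather than from the animator’s own conditions.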

I also put together the lighting system for this scene, which indicates to the user which characters to approach and interact with. Some lighting cues are triggered by dialogue; others are triggered by animations ending, as needed.

all the spotlights turned on

After that, I did the first, brighter scene, which we started calling the “day scene.” Having characters face the user was easy. Having NPCs walk along a certain path continuously, face the user when clicked on, and then return to walking that path afterward was a bit harder. I figured it out, though, writing some pieces of script in the dialogue manager that stored the rotation of the NPC on Start() and then returned the NPC to it when the user clicked on the final page of the dialogue for that interaction. The same would happen when the user left the detection colliders of that NPC. Animation starts and stops were also made to correspond. I created separate idle states with trigger transitions that the dialogue manager, or another script if necessary, would fire when the user clicked on the NPC. Hence, when a user clicks, the NPC stops moving, looks at the player, and plays an idle animation. When the user clicks all the way through the dialogue or walks away, the NPC returns to its original orientation and starts walking, or simply continues to stand or sit.

animator for pacing, user clicking triggered transition to wave
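The day-scene pattern can be sketched roughly as below. All names (the component, the `player` field, the animator trigger names) are hypothetical stand-ins for illustration:

```csharp
using UnityEngine;

// Hypothetical sketch: remember an NPC's facing on Start(), turn it
// toward the player on click, and restore it when the dialogue ends
// or the player walks away.
public class NpcFacing : MonoBehaviour
{
    public Transform player;
    private Quaternion originalRotation;
    private Animator animator;

    void Start()
    {
        originalRotation = transform.rotation;
        animator = GetComponent<Animator>();
    }

    // Called by the dialogue manager when the user clicks the NPC.
    public void FacePlayer()
    {
        Vector3 toPlayer = player.position - transform.position;
        toPlayer.y = 0f; // keep the NPC upright
        transform.rotation = Quaternion.LookRotation(toPlayer);
        animator.SetTrigger("idle"); // stop pacing, play an idle animation
    }

    // Called on the final dialogue page or when the player exits the colliders.
    public void ResumePath()
    {
        transform.rotation = originalRotation;
        animator.SetTrigger("walk"); // resume the original behavior
    }
}
```

The dialogue manager would call `FacePlayer()` on click and `ResumePath()` on the final page or on collider exit, matching the behavior described above.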

Once I had these scenes done, I would send them to Yeji, who added the final sound pieces, and then we worked on developing an aesthetic trigger for switching scenes that functioned the way we wanted it to. The trigger only appears in the first scene after the user speaks to the king. Dialogue guides them to the king, though they can choose to speak to him first if they wish. The trigger only appears in the second scene after the sixth mini-scene has completed, ensuring that the user stays in the night scene until all six mini-scenes are witnessed.

scene transition orb

Generally, I learned many lessons about C# and about the power of scripting in Unity. It is frustrating that the end product is not perfect. NPCs do not always turn properly when clicked on. Sometimes the dialogue panel refuses to appear. When we demoed it for the class, we found a hole in the colliders that we had never encountered before. However, I think the experience can go one of two ways. The user can skip through the dialogue as fast as possible, hit the big shiny orbs, and then fumble for the escape key or alt-f4 to quit. Or, the user can slow down, pay attention to the characters, read the dialogue, and come to understand an interesting story about what is reflected in the mirrors of the people we encounter every day, and what we all pretend to forget. Given the timezone differences between our group and our relatively amateur skill sets, I think we did fairly well. So, without further ado, here’s a run-through.

Calvino | Melania

The city of Melania, from “Cities & the Dead 1,” illustrated a feeling of endless consistency that has featured in two of the places I’ve lived. As Calvino describes, “at Melania, every time you enter the square, you find yourself caught in a dialogue,” yet if “you return to Melania after years . . . you find the same dialogue still going on” (72). This persists despite the fact that “Melania’s population renews itself,” and that “the dialogue changes, even if the lives of Melania’s inhabitants are too short for them to realize it” (73).

In the city in which I grew up, Oklahoma City, the dialogues around me were often the same. Kids are causing trouble. The weather makes no sense. Healthcare sucks. Family is everything. Work hard. The government doesn’t care about you. Politicians are liars. Go to Church on Sundays. School is boring. We’re due for another drought. Moore will have to rebuild, again. In high school I started working and gained more and more independence from my family, allowing me to experience more and more of the city and wider metropolitan area (comprising about 20 surrounding towns and suburbs). No matter where I went or who I talked to, these things persisted.

After my junior year, I moved to finish school at a boarding school in the mountains of New Mexico. On the weekends, I would semi-regularly go into the nearby town of Las Vegas, New Mexico. Over time, I started to hear the town’s own dialogues. The government doesn’t care. There’s no more jobs. So-and-so is leaving. That shop is closing. These people are visiting. Walmart sold out of this today. As I went back and forth between OKC and Las Vegas, I came to notice the contrast more and more and started to understand that these places had a certain temporal consistency, not a refusal to change but a refusal to acknowledge the inevitable change as it happened.

In the last two years, I’ve only spent a little over two months in Oklahoma City, and I’ve started to notice that renewal Calvino describes. The barista I used to know by name now works as host and the host now works as barista. The city has built a new park where an abandoned office building once stood, and built an office building on what used to be part of a park. The daughter of the family who owns the Moroccan restaurant down the street runs the restaurant she used to serve in, and her son pours our tea. The drug dealer who worked the corner near my friend’s house now owns the house across the street, and his dealers work the corner.

Of course, all cities have these tropes, these things which are talked about more often than others, and different parts of cities speak of different things. None of this is meant to condemn this tendency to refuse to acknowledge change, but rather to reflect on its inevitability, and the way some places cope with its challenges. In some places, even when coronavirus arrives, people still worry more about the next tornado season.

Project 2 Documentation | Fire Planet

Description

The user stands on a barren planet, fires burning all around, with mechanical sprinklers keeping most of the flames at bay. Behind the user towers a city in the distance, surrounded by a dome. A voiceover calls upon the user to use their powers to fight the fires and repair a sprinkler that has broken. Using their abilities as the designated protector, the user can fire projectiles from their hands to extinguish the fires and clear a path to the sprinkler. Reaching it, the sprinkler reactivates, and the voiceover congratulates them. In short, the user firefights on a different planet.

Process

We started thinking about interesting, observable, everyday actions. Building upon our ideas from the in-class activity and considering the controllers of the HTC Vive, we landed on the action of waving a fan. Simultaneously and iteratively, we considered in what situations waving a fan does occur and could occur inside of an alternate reality. This led us to think about firefighting with a fan. With this action and character context, we built a narrative of a civilization under constant threat from the fires of the planet they live on.

We imagined what the life of a firefighter in such a civilization would be like. A civilization under constant threat of fires would establish more permanent and secure defenses against the ever-present threat of fire. Hence, the city of the civilization is covered by a dome, and the dome is surrounded by large mechanical sprinklers. The role of a firefighter as a first-responder shifts slightly in this reality, then, as they are tasked with repairing and maintaining the defenses rather than fighting the fires themselves. This translated into the user repairing a broken sprinkler, using a fan to fight the fires which had encroached inside the boundary in the meantime.

storyboard

Due to the change from a VR system to a computer system, we had to reconsider the interaction of the fan. While an HTC Vive controller mimics the handle of a fan and can be waved independently of the user turning their head, waving a mouse around in first-person would force the user’s head and aim to move erratically or require an unconventional restricting of movement. Because of these affordance problems, we changed to an earlier idea: throwing water balloons. The FPS-reminiscent setup allows a clearer connection to what users already expect from a mouse-and-keyboard game. Upon in-class feedback, we abstracted the water balloons into spells.

With the world, character, and interaction worked out, we moved to divide up the work and start building. Steven worked on the scene design and hand animation. Mari worked on the projectiles. I worked on the particle system interaction between the extinguishing projectile and the fires.

To create the effect of an endless fire, the fires had to burn continuously and reignite if extinguished. The reignition had to be delayed in order for a path to be cleared by the user. After exploring the asset store for different fire systems, I started to use the Fire Propagation System. I followed the use instructions, learned exactly how all of the parts worked, and realized it would not work. The fire system was too realistic. The fires in this system burned based upon available fuel and different material and climate properties. After the calculated “HP” of a material is reached by the ignition system, a fire starts, and then burns until the material’s fuel value is reached. Unfortunately, we did not want a fire that would always die.

I read through the sections of the Unity reference on particle systems and collision and then found tutorials on how to script delays of actions and to disable and re-enable game objects. With this research, I was able to write two scripts. One script detects collisions between the extinguishing particle system and the fire particle system and disables the fire system upon collision. The other script detects when fires have been disabled and then re-enables each particle system after a given delay. With this much simpler method, the affordances of the extinguishing spells and the response of the environment are much clearer.
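A rough sketch of this two-script approach, with assumed names and values, might look like this (the extinguish script relies on Unity sending `OnParticleCollision` messages, which requires the projectile’s Collision module to have “Send Collision Messages” enabled and the fire object to have a collider):

```csharp
using System.Collections;
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch, attached to each fire: disable the fire object
// when the extinguishing projectile's particles hit its collider.
public class FireExtinguish : MonoBehaviour
{
    void OnParticleCollision(GameObject other)
    {
        gameObject.SetActive(false);
    }
}

// Hypothetical sketch: a single manager that watches all fires and
// re-enables each one after a delay, so cleared paths eventually close.
public class FireRespawner : MonoBehaviour
{
    public GameObject[] fires;
    public float reigniteDelay = 5f;

    // Track fires already waiting to reignite so we only start one
    // coroutine per extinguished fire.
    private HashSet<GameObject> pending = new HashSet<GameObject>();

    void Update()
    {
        foreach (GameObject fire in fires)
        {
            if (!fire.activeSelf && !pending.Contains(fire))
            {
                pending.Add(fire);
                StartCoroutine(Reignite(fire));
            }
        }
    }

    IEnumerator Reignite(GameObject fire)
    {
        yield return new WaitForSeconds(reigniteDelay);
        fire.SetActive(true);
        pending.Remove(fire);
    }
}
```

Splitting detection (per fire) from timing (one manager) keeps each script trivial, which is exactly why this approach was simpler than the asset-store propagation system.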

The scripts were combined with Mari and Steven’s work. As final touches, a voiceover was added to clarify the narrative and contextualize the user’s role in the experience, and a cylindrical indicator was also used to guide the user to the disabled sprinkler.

Scripts

Script Demo Images

Reflection

In the end, this experience successfully establishes an everyday activity in an alternate world. It approaches this challenge by considering a specific role, firefighter, and restructures that role in an alternate reality, the fire planet. The voiceover, hand animation, particle path, and fire system all clearly demonstrate or reinforce the different affordances of the experience. While the world and the character matched what we envisioned early on, the interaction itself lacks much of the seamlessness afforded by a VR system and its controllers. While the materiality and contextualization of the interaction are unique, we did not have the opportunity to explore the potentialities of a new medium the way creating a fanning interaction in VR would have.

Agency

The main action designed for the user in Fire Planet is extinguishing fires. Its meaningful quality relies upon the instinctive perception of uncontained fire as a threat, the intuitive perception of a city as something to be defended, and the provided (via voiceover) impetus of the user’s role as a protector. The planet’s features and orientation of the user help establish these perceptions. The uncontrolled fires contrast against the fires controlled by the functioning sprinklers. The user starts close to and facing the uncontrolled fires, with the city behind them. Placing them between danger and the endangered, and having them face the danger rather than face away from it, suggests the role of a user as a guardian about to defend rather than a victim about to flee. The voiceover then makes all of this explicit through narrative, establishing a motive and a specific task, repairing the sprinkler, that will allow them to fulfill their role completely. All of these elements combine to drive the user to extinguish the fires.

Project 2 Dev Journal | Fire Planet

For this project, I’m working with Mari and Steven. After a few in-class discussions, we met again for a brainstorming session. We started with coming up with different environments, then moved to thinking of different everyday actions. Throughout the process, we continuously returned to the question, “Why VR?” We considered what VR could do, and what everyday means. As we navigated the balance between compelling and feasible, we realized our ideas revolved around the themes of playing with known roles, perception of scale, and customization.

brainstorm mind map

As usual, guiding our thinking with even this messy mind map helped us find our way to a good idea. In the end, we landed on what it would be like to be a firefighter on a fiery planet. Struck by its nice balance between feasible and compelling, we developed a few versions of the narrative and started to storyboard.

storyboard 1
storyboard 2

In this experience, the user finds themselves on a fiery planet. In front of them, a tall fire burns. Sprinklers in the near foreground keep it at bay, but a small asteroid has landed on one of the sprinklers, pushing it into the ground. Because of this, the fire is slowly creeping through the gap toward the user. Behind them, a dome extends to the left and the right horizons and soars into the sky. Beyond the barrier, a city rises up. A voice tells the user, who holds a fan in one hand, to try and reach the broken sprinklers and repair them. The user has to wave the fan to push the flames back, and, when they reach the sprinkler, reach down, remove the rock, and pull the sprinkler out.

So far, the idea is fairly well developed. However, we still have to work out the specifics of the interactions and the cues which direct the user’s attention.

March 14 | Update

After the assignment changed from VR to computer-based, we decided to focus on the interaction of the user putting out the fire with a tool in their hand. Because of the medium change, we changed this tool from a fan to a sort of water spell. From there, we split up the components of the interaction and started to work. Steven put together the character and the first-person animation of the arm throwing the balloons. Mari developed a raycast system that would move the spell from the user’s hand to where they were aiming in the distance. I developed the water-fire interaction.
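The raycast aiming idea can be sketched as follows; this is not Mari’s actual script, just an illustration of the pattern, and every name and value in it is an assumption:

```csharp
using UnityEngine;

// Hypothetical sketch: raycast from the camera through the screen
// center and launch the spell from the hand toward the hit point.
public class SpellThrower : MonoBehaviour
{
    public Camera playerCamera;
    public Transform handOrigin;   // where the spell spawns
    public GameObject spellPrefab; // must carry a Rigidbody
    public float spellSpeed = 20f;

    void Update()
    {
        if (Input.GetButtonDown("Fire1"))
        {
            // Aim wherever the center of the screen is pointing.
            Ray ray = playerCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
            Vector3 target = ray.origin + ray.direction * 100f; // fallback: far point
            if (Physics.Raycast(ray, out RaycastHit hit))
            {
                target = hit.point;
            }

            // Spawn the spell at the hand and send it toward the target,
            // so it converges on the crosshair even though it starts offset.
            GameObject spell = Instantiate(spellPrefab, handOrigin.position, Quaternion.identity);
            Vector3 direction = (target - handOrigin.position).normalized;
            spell.GetComponent<Rigidbody>().velocity = direction * spellSpeed;
        }
    }
}
```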

For my tasks, I ended up writing two simple scripts. We wanted the fire to burn continuously but to disappear when the water touched it. We also wanted it to reignite after a time so the user would constantly have to be putting the fires out. One script allows a particle system to disable the fire particle system upon collision. The other script re-enables the disabled fire after a delay. This latter script manages all the fires in the scene.


response, a medium

Kwastek quotes Jochen Schulte-Sasse to describe “medium . . . as a ‘bearer of information . . . that fundamentally shapes it. . . so as to give form to human access to reality’” (167). Krueger asserts response as a “medium comprised of sensing, display and control systems” (430). To understand response as a medium, then, is to understand it as that which simultaneously shapes and presents the information contained within systems. As elaborated by both authors, these systems contain technical and aesthetic components which, as Kwastek repeats, form some sort of gestalt via interaction with a user. Kwastek ultimately veers away from calling interactive media arts a medium, favoring the analogy of apparatus. Response, then, might serve as the medium of interactive media, as that which is manipulated by the artist via a system which requires interaction from a user.

Project 1 Documentation | Finding Solitude

This experience emerged from the idea of solitude. Considering both Gabriel García Márquez’s One Hundred Years of Solitude and the way spaces between places can have placeness led me to understand solitude as a place marked by chosen, intentional loneliness. Such placed space becomes apparent in games like The Long Dark, where the vastness and the spectral presence of humans in their artifacts turns what would otherwise be in-between space into a place of “the forest” or “the ghost town” or “the wild.”

screenshot from The Long Dark

Because of the practical limitations of having to rely upon visual cues for immersion, I focused on filling the space with objects that would suggest a long-term occupant whose grasp on physical reality has been eroded by solitude. In response to this erosion, the occupant tries to remind themselves of reality by manipulating it physically, but their mind starts to slip. Finding Solitude situates the user inside of a stone cave, with a campfire, woodpile, sleeping mat and various stone and wooden animal sculptures, along with the tools to make them, scattered about the cave. The smoke rises from the campfire and spreads across the ceiling. A waterfall rises across the entrance to the cave, flowing backward and obscuring a view of a frozen lake and several mountains.


Initially I thought of creating an empty cave, but after some research, I realized that an empty cave may feel lonely; the time and effort implied by filling it with artifacts of activity (the statues, the figurines, the tools, the wood stack, the campfire, and the sleeping mat) would turn that loneliness into solitude. Someone, the user, has been there, expending their time and effort. The repetitiveness of the statues and the size of the wood stack imply the obsessiveness of this expenditure, suggesting the futility of effort which yields nothing for anyone outside of the cave, anyone aside from the user. While these serve as indicators of the way a mind in solitude starts to shift, the upside-down waterfall confirms that the user’s experiential reality of the world has collided with their physical reality. They are hallucinating. Logically, if the user entered the cave, they should be able to exit, but the waterfall covers their only exit and creates cognitive friction with such causal inferences. The user is made to believe that they cannot leave when, in fact, they can. In this way, all of these elements combine to create a sense of being present inside a mind filled with solitude, a reality which is neither the physical reality nor the experiential mind, but somewhere in between.

The initial view is almost banal, with a campfire, a wood stack, and a cave entrance covered by a waterfall. I wanted something to be slightly off about this view, so I inverted the flow of the waterfall. The placement of the campfire allows for the smoke cue discussed later, but it also makes sense practically. The fire’s smoke should mostly exit the cave and the light should ward off dangerous creatures. Hence, its placement implies the foresight of one who has been in the wilderness long enough to understand these functions, as well as the nature of the wilderness outside.


If the user turns left, the wood stack comes into view with an axe leaned up against it. The stack has an unreasonably large amount of wood, but does not necessarily suggest anything strange aside from the amount of time the individual has been there. It also implies, along with the unevenness of the walls and ceiling, that the cave exists within a larger natural context with plenty of available firewood, such as a hilly or mountainous forest. When the user turns completely to the left, a wolf head statue comes into view, and they are drawn to the back wall.

Similarly, if the user turns to the right, they start to see the statues, which guide them to turn all the way around to see the wall of various animal carvings.

Tools lying on the floor near the statues bring the user’s view down, a movement that will guide them to the figurines and the sleeping mat if they turn left initially or toward the back wall if they turn right. The sleeping mat also implies the number of people who live in the cave: one.

A final, subtler cue to look at the statues comes from the smoke, which radiates toward the back of the cave. Both the waterfall and the smoke are constituted by large particles rather than lines. This both fit better with the style and helped make the two feel more like objects in the space by exaggerating their movement to compensate for the lack of sound and smell that often indicate the presence of such translucent substances.

Overall, I felt this experience did succeed in implying solitude, but there are many improvements that could be made. In terms of immersion, sounds of the waterfall and the campfire would help. In the initial design, I also wanted the statues to growl when the user was not looking at them to add a bit more tension to the scene. The smoothness of the floor and the way the smoke blocks just disappear like bubbles popping also bring users out of the experience. Finally, using animal models designed for low-poly rather than adding a stone texture to models designed for life-like textures would improve the consistency of the aesthetic.


Aside from these improvements, however, I am satisfied with the work. Rarely does an idea manifest as I initially intended it to because I have to bring out a concept without the ability to construct the world it exists in. With this project, however, the result was nearly the same as the initial inspiration, with the only limit being my skills with the technology. While that limitation did force me to cut many of the initial interactions I had planned, it also humbled me, making me appreciate the consideration and thoughtfulness that must go into creating a static environment. In particular, the medium allowed me to create exactly what I wanted, so I had to think much harder about the why, something that is often lost or modified when I fabricate tangible projects.

Project 1 Dev Journal | Finding Solitude

This project started with a simple word-association brainstorm. I tried to think of as many potential identities for scenes as I could, focusing on both environmental and affective or atmospheric characteristics. I then moved into combining different identities and coming up with simple scenes for each of them, ending up with about five different scenes. However, the scenes I was thinking of and sketching out were narrative or interaction driven. It was hard to think about a static scene with the sole interaction of looking.

A whiteboard with the word association exercise and small diagrams of potential scenes.
initial brainstorm whiteboard


Stuck for a moment, I started thinking about Kentucky Route Zero and the ideas of scenography that Kemenczy talked about as I pulled up images and environments from games that I’ve played or heard about because of their affective qualities. From there I tried to draw a conceptual line through the affective qualities from those games and the scenes that had started to take shape from the brainstorm and landed on solitude. At this point, I started to compile images from the games, abstract and conceptual art regarding solitude, and textures which evoked solitude into a Pinterest board.

Drawing on Gabriel García Márquez’s One Hundred Years of Solitude, I tried to understand how these environments depicted solitude as opposed to being alone, that is, how they visually imply the choice to be alone. Games like Red Dead Redemption, Alan Wake, and The Long Dark do so primarily through playing with space and place. While the characters’ outfits can imply a decision to isolate out of necessity, so too do the simple empty expanses, the sheer amount of space that these games cover. After enough time spent wandering in any of these games, those in-between spaces start to feel like a place themselves, a place of solitude, where the world reflects the underlying reality of the characters’ lives. Without the ability to move through a large expanse for several hours, however, I turned back to Márquez’s magical realism techniques, where physical reality and experiential reality have no boundary. With this in mind, I tried to think of the home of someone seeking solitude, and the scene took shape.

A pencil drawing of the first sketch of the scene.
first pen sketch of the scene

The scene takes place in a cave. A waterfall flowing upwards covers the entrance of the cave. On the floor in front of the user are a sleeping roll and burning campfire, its smoke curling up and covering the ceiling. The back section of the cave has been sculpted into dozens of animal heads, with the tools lying on the ground with the rubble from the work. Stacked against one side of the cave is wood, as much as can be comfortably held, with an axe leaning against the stack. Strips of dried meat are fitted into the cracks of the wood pile. On the opposite wall is a tally, implying the number of days the user has spent in the cave. On the floor are several wooden figurines, practice pieces for the animals on the back of the cave. On the other side of the waterfall, obscured from the user’s view is a frozen, moonlit lake with dozens of silhouettes, their glowing eyes staring into the cave.

6 different 360 degree story boards.
story boards of the scene from multiple user positions


There should be several interactions, but after the feedback session, I have decided to play with the density of the waterfall and the smoke so that the silhouettes and the glowing eyes are slightly visible but obscured.

With the concept and structure figured out, I pulled enough assets from the Unity store and now have to begin making the scene.

Murray | Ch. 3

Murray asserts that digital environments are spatial, but her definition of that property more appropriately fits an assertion that digital environments simulate spatiality. In her own words, these “environments are characterized by their power to represent navigable space” (96). That is to say that the “computer screen is displaying a story that is also a place,” which leads to the “challenge” of “invent[ing] an increasingly graceful choreography of navigation to lure the interactor through ever more expressive narrative landscapes” (100). In the sense that Murray describes, then, VR can represent this property at its most extreme. As graphics, motion tracking, and field of view improve over time, there will be a thinner veil between physical reality and a virtual experience. However, to appreciate this potential fully, Murray’s definition has to be expanded.

Digital environments do not only simulate spatiality; they exist spatially. Computers and the devices in which they are embedded take up space. Digital environments crisscross the world in cables and satellites. These environments draw us into overly relaxed postures, shifting the way we take up space as well. Of course, digital environments do simulate spatiality, but this is a trick, one we are acutely aware of. We can willingly suspend our disbelief and pretend digital environments extend beyond our screens. We can overlay useful digital environments onto our real world, the way we do with Google Maps. We can also leave these simulations. In VR, these choices still exist, except there is much less, if any, need to suspend disbelief because the user is in the digital environment. Hence, VR takes the spatial principle from being a representation or a display to a truth.