Final Project Dev Journal | Reflection

13/4

After a bit of brainstorming, Ganjina, Yeji, and I narrowed our ideas down to two: one that intersected apocalypse and Invisible Cities, and another based upon one of the invisible cities. After receiving feedback on our ideas, we decided upon the latter, an experience expanding upon the city of “Valdrada” (Calvino 45-46). The practical limitations of interaction in Google Cardboard made the former idea much less compelling.

According to Calvino, this city was “built. . . on the shores of a lake, with houses all verandas one above the other,” such that “the traveler . . . sees two cities: one erect above the lake, and the other reflected, upside down” (Calvino 45). Calvino goes on to describe it as “twin cities” that “are not equal,” for “every face and gesture is answered, from the mirror, by a face and gesture inverted, point by point” (45-46). In this way, they “live for each other . . . but there is no love between them” (46).

From these descriptions, we began to think about different stories that would emerge from, and environments which would embody, such a city. For the environment, we are currently leaning toward a tropical island style, though this may change as we work out the story more thoroughly. Furthermore, we may adopt a visual style similar to Kentucky Route Zero. The next step is to flesh out the story in its entirety.

So far, we’ve established that the setting contains two cities which have been affected by a large-scale traumatic event. In the non-reflection city, the characters respond to the stress of such an event in ways which are ultimately positive, whereas in the world below, their responses reveal the darker aspects of humanity. In some way, these two worlds will interact visually, and perhaps causally.

As for the specific stories to which users will bear witness, I’ve so far thought of watching the relationship of two people. In the non-reflection city, users watch key moments of their lives which bring them to be the closest of friends, and in the reflection city, users watch them become enemies who eventually kill each other. The user might be able to influence the story by choosing which world is the physical one and which is the reflection.

28/4

Much has changed in the last two weeks. To start, our environment and story have shifted significantly. Shortly after finishing the last post, Yeji suggested a compelling environment inspired by Amsterdam, coupled with a story that revolves around the user exploring the meaning of reflection. We decided to develop and begin building that idea.

Pre-development Story Description

Yeji’s illustrations of the idea

There will be two cities, one we’ve been calling the “initial” city and the other the “reflection” city. The user starts in the initial city: the houses are colorful, and the sky is beautiful as the street lights come on and light up the city. People are walking around, and the atmosphere is cheerful. A pier stretches out onto the lake. The user can walk around the lake and talk with the NPCs wandering about their daily lives. Over time, a light will appear in the water at the end of the pier, drawing the user to it. When the user reaches the light, they find they can click on it, and suddenly the world shifts.

The user finds themselves in a city that is the reflection of the initial city. The building positions are flipped. The sky is dark. The houses are grey. No one is outside. A few houses have color and a visual cue similar to the one in the water, suggesting the user may interact with them. As the user approaches these homes, they can peer into the window to see an interaction reflecting a major event which has negatively affected the residents, something which those in the initial city spend their lives ignoring.

Development So Far

Paper Prototyping

After establishing a general outline of the experience, we sketched paper prototypes for the play-testing session.

We received several major feedback points:

  • How will we guide the user to the pier? Dimming the sky to increase visual salience of the cue may help. Using NPC interactions and dialogue would be even better. Starting the user nearer to the pier might also help.
  • How are we moving? We could do automatic movement, like in TrailVR, or we could do Google Maps-style point-and-teleport.
  • How much animation are we doing? Trying lots of animation means higher potential for making good use of VR storytelling space. It also means more overhead and more work.

Once we received feedback on our paper prototypes, we decided to pull assets before dividing up the work of building and finalizing the story.

Story

We shared a few ideas at the end of a meeting, and then Yeji took the lead on the story until we could meet again to finalize it. As of now, it is still being fleshed out. The “big event” idea mentioned earlier has been grounded in a fear of the unknown outside world. The city, as depicted, will be entirely enclosed around the lake. The reason for this peculiar design will likely guide the tension of the story. The story is also emerging from an asset pack of fantasy characters that we picked up.

Environment

Ganjina started building the environment based on the drawings, and a few things have changed as a result of the assets we are using. First, the “lake” is square. Second, the pier is a bridge, though this may change. Otherwise, as of now, the first draft of the initial city is done.

Dialogue System

For the dialogue, we considered doing voice-overs but ultimately decided that a text-based system would be more interesting to try and much more scalable. I was given the task of starting to build the dialogue system.

The first step was to build a panel for all of the UI to go on. I knew that I wanted to build a panel based on the optimal ergonomic VR workspace areas defined by Mike Alger in 2015. For that, I would need a structural framework which could take real-world inputs. Unfortunately, my skills with Unity are not such that I could satisfy my own need for accuracy. Luckily, I’ve been working in Fusion 360 a ton for Machine Lab, and I was able to build my own panel.

There are two versions. The first is a spherically-curved panel. I estimated 2/3 arm reach distance as 2/3 of the FOV of Google Cardboard, roughly 60 degrees. I then took the 15 to 50 degrees below eye level, and intersected the sketches with a body.

spherically curved panel

However, after I implemented this, I realized that the asset pack I was using, Easy Curved UI Menu for VR, could only curve around a cylinder, that is, around one axis. Matching that up against the optimal spherical curvature was sub-par, so I built another one that only curved around one axis.

cylindrically curved panel

After working with the Easy asset, I realized it would never do what I wanted. I could not get the curved menu to rotate at 32.5 degrees (the mid-curve of the optimal angles). The asset pack essentially takes a flat panel, cuts it up into a bunch of rectangles and looks at them through a rendering camera, generating a curved plane in the process. Unfortunately, every time the curved menu generates on play, it resets its rotation.

easy UI refusing to rotate to match the frame

I did some research and found a method with UI Extensions. That worked great. I matched up the bezier curves, and moved on.

fitting the UI panel with the model panel

From there I focused on making the panel translucent when no dialogue was present. To do this, I had to drop the background image and keep the 3D model I had made in the environment instead. The GoogleVR reticle system also gave me no errors and no functionality whenever I implemented it, so I built a reticle and raycasting scripts based on class examples. By the writing of this post, this is where I am at in the overall system:

The panel is translucent when not in use, but present so it does not jar the user when it appears. The reticle changes color when it hovers over an object marked “NPC,” and clicking on an NPC makes the panel more opaque and brings up the dialogue related to that NPC. When the user looks away from the NPC and clicks, the dialogue disappears, and the panel becomes more translucent again.
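In rough terms, the reticle script works like the following sketch (a minimal reconstruction, not our exact code; the field names are illustrative):

```csharp
using UnityEngine;
using UnityEngine.UI;

// Sketch: cast a ray from the camera each frame and tint the reticle
// when it rests on an object tagged "NPC". Clicking notifies the NPC
// so the dialogue manager can open the matching dialogue.
public class ReticleRaycaster : MonoBehaviour
{
    public Image reticle;                  // reticle image at the screen center
    public Color idleColor = Color.white;
    public Color hoverColor = Color.cyan;
    public float maxDistance = 10f;

    void Update()
    {
        Ray ray = new Ray(transform.position, transform.forward);
        bool overNpc = Physics.Raycast(ray, out RaycastHit hit, maxDistance)
                       && hit.collider.CompareTag("NPC");
        reticle.color = overNpc ? hoverColor : idleColor;

        if (overNpc && Input.GetMouseButtonDown(0))
            hit.collider.SendMessage("OnGazeClick", SendMessageOptions.DontRequireReceiver);
    }
}
```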

Dialogue System To-do

  1. script user’s ability to scroll through several panels of dialogue
  2. test with multiple dialogues
  3. test with multiple NPCs
  4. script automatic movement
  5. script movement restriction upon dialogue initiation
  6. script movement freeing upon dialogue ending
  7. bring it into the larger build

12/5

The Play Tests

Shortly after my last post, Yeji had extensive dialogue written, and we all met to finalize the story. Ganjina continued to push through on the environment, finishing both the initial and reflection cities. With the dialogue system, I created a series of demos.

The first demo was not much more than the system itself, a way of explaining how it worked to the user for feedback during our very first play testing session.

We received feedback to change the reticle so it was more noticeable, to speed up the walk speed, and to decrease or eliminate the delay between clicking and the start of walking. During the play test, we also explored how to drive our story forward, and several ways were proposed. We could use the dialogue, restricting or obscuring actions until the player made progress. We could also use the time of day to push the user to move on from the daylight scene.

Using this feedback, as well as dialogue developed by Yeji, I built a second tutorial which demoed the way we decided to organize the characters and interactions in order to drive the story forward. At this point, Yeji started animating the characters while Ganjina put the finishing touches on the environment.

Using real characters and real dialogue created some design problems. While there were many small ones, one of the larger ones was how to trigger, on both the front and back end, the indicator that an NPC could be interacted with. Using the GVR reticle pointer system, I set the distance to 2.5. I then set up a collision detection system with spheres around each NPC. Done, right? No. The player would have to scroll through the dialogue while inside the sphere collider by clicking on the dialogue panel. However, because they were still inside the sphere, the raycast would detect a hit on the NPC, not the dialogue panel.

pseudo-cylinder colliders below dialogue panel but tall enough to detect player

I put the sphere colliders on children of the NPC game object and set those children to the layer “Ignore Raycast.” Nope, same problem, most likely because the box collider on the body still had to be clickable to activate the NPC interaction, so I could not set it to “Ignore Raycast.” My final solution was a compound collider made up of four box colliders set at 0, 22.5, 45, and 67.5 degree rotations about the y-axis around each character. This created a pseudo-cylinder that rose high enough to detect the player controller entering it but low enough that it would not encounter the dialogue panel. This worked fairly well, but I did not figure out that “Ignore Raycast” existed until after the second demo was ready, so the dialogue panel was very buggy in this demo. Similar problems with the scene-switch button meant play testers had to run through multiple times to encounter all the NPCs.
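The detection half of that compound collider boils down to something like this (a sketch with illustrative names; the rotated box colliders are set as triggers on the “Ignore Raycast” layer):

```csharp
using UnityEngine;

// Sketch: attached to the pseudo-cylinder of trigger colliders around an NPC.
// Entering the triggers shows the "this NPC is interactable" indicator;
// leaving hides it again.
public class NpcProximity : MonoBehaviour
{
    public GameObject interactIndicator;   // highlight/icon object (illustrative)

    void OnTriggerEnter(Collider other)
    {
        if (other.CompareTag("Player"))
            interactIndicator.SetActive(true);
    }

    void OnTriggerExit(Collider other)
    {
        if (other.CompareTag("Player"))
            interactIndicator.SetActive(false);
    }
}
```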

An additional problem at this time was a lack of order in the night scene. Users could just go up to anyone. The GVR reticle was also not accurately indicating when the user could activate a component.

Because of these problems, we received limited feedback. There needed to be an indicator showing whether or not more dialogue remained. For this, I added next and stop buttons that appear at the appropriate moments on the panel (see video at end). One play tester also suggested bringing the dialogue box up and to the side rather than down low. However, the positioning of the panel should be based on experience in VR rather than on a computer. While a low panel might be annoying on a computer, looking down at something in your hands or near chest height to read information about something in front of you is a fairly common interaction. It was something I decided to try at the end if we had time, and we did not. One piece of feedback that was implemented after this session was the change from a panel that switches between opaque and translucent to one that switches between active and inactive, minimizing its on-screen time.

Final Development Stages

At this point, Ganjina had finished the environment and started finding sound files and testing ways to switch from one scene to the next. Yeji would orient the characters in the completed environment and determine their interactions. These scenes would be sent to me, and I would integrate the dialogue system into the animations. This was the most difficult part. On the back end, my limited experience with C# made it difficult to ensure that everything would trigger exactly how and when we wanted.

I could write a small book on all the little things that went wrong during this. What was probably the driver of many of them was the fact that the two scenes had reflected some things and not others, and one of those things was the user’s agency. In one, the user commands the NPCs; they attend to the user upon the user’s request and guide the user to where they want to go. In the other, the user witnesses the NPCs: they ignore the user and let the system explain where they have to go.

I did the second, darker scene, which we started calling the “night scene,” first. In this one, every scene’s dialogue had to be broken up, with individual pages triggering and being triggered by different animations and sounds. It also took a while to figure out that if I wanted the user to adhere to a certain order without a plague of null references, all I had to do was avoid using else statements. I also became fluent in accessing and assigning variables inside of animators. A good example of this is the “deathTrigger” script, which checks if another specific NPC’s animator is running its “attack” animation and, if so, triggers the boolean “attacked” in the animator of the NPC it is attached to, leading to whatever animations result.
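The core of that script looks roughly like this (a sketch from memory; state and parameter names are illustrative):

```csharp
using UnityEngine;

// Sketch of "deathTrigger": watch another NPC's animator and, once its
// attack animation is playing, flip the "attacked" bool on this NPC's
// animator so the resulting animations play.
public class DeathTrigger : MonoBehaviour
{
    public Animator attackerAnimator;   // animator of the attacking NPC
    Animator ownAnimator;
    bool triggered;

    void Start()
    {
        ownAnimator = GetComponent<Animator>();
    }

    void Update()
    {
        if (triggered) return;
        // True while the attacker's base layer is in its "attack" state.
        if (attackerAnimator.GetCurrentAnimatorStateInfo(0).IsName("attack"))
        {
            ownAnimator.SetBool("attacked", true);
            triggered = true;
        }
    }
}
```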

I also put together the lighting system for this scene, which indicates to the user which characters to approach and interact with. Some of the lighting cues are based on dialogue; others are based on animations ending, as needed.

all the spotlights turned on

After that, I did the first, brighter scene, which we started calling the “day scene.” Having characters face the user was easy. Having NPCs walk along a certain path continuously, face the user when the user clicked on them, and then return to walking that path afterward, that was a bit harder. I figured it out though, writing some pieces of script in the dialogue manager that stored the rotation of the NPC on Start() and then returned the NPC to it when the user clicked on the final page of the dialogue for that interaction. The same would happen when the user left the detection colliders of that NPC. Animation starts and stops were also made to correspond. I created separate idle states with trigger transitions that would be fired by the dialogue manager, or another script if necessary, when the user clicked on the NPC. Hence, when a user clicks, the NPC stops moving, looks at the player, and does an idle animation. When the user clicks all the way through the dialogue or walks away, the NPC returns to its original orientation and starts walking, or simply continues to stand or sit.

animator for pacing, user clicking triggered transition to wave
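Stripped down, the store-and-restore idea from above looks like this (a sketch; the method names are illustrative and would be called by the dialogue manager):

```csharp
using UnityEngine;

// Sketch: remember the NPC's walking rotation on Start(), turn to the
// player while dialogue is open, and restore the stored rotation when
// the dialogue ends or the player walks away.
public class NpcAttention : MonoBehaviour
{
    public Transform player;
    Quaternion storedRotation;

    void Start()
    {
        storedRotation = transform.rotation;   // original walking orientation
    }

    public void BeginDialogue()
    {
        Vector3 toPlayer = player.position - transform.position;
        toPlayer.y = 0f;                       // keep the NPC upright
        transform.rotation = Quaternion.LookRotation(toPlayer);
    }

    public void EndDialogue()
    {
        transform.rotation = storedRotation;   // resume the original path
    }
}
```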

Once I had these scenes done, I would send them to Yeji, who added the final sound pieces, and then we worked on developing an aesthetic trigger for switching scenes that functioned the way we wanted it to. The trigger only appears in the first scene after the user speaks to the king. Dialogue guides them to the king, though they can choose to speak to him first if they wish. The trigger only appears in the second scene after the sixth mini-scene has completed, ensuring that the user stays in the night scene until all six mini-scenes are witnessed.

scene transition orb

Generally, I learned many lessons about C# and about the power of scripting in Unity. It is frustrating that the end product is not perfect. NPCs do not always turn properly when clicked on. Sometimes the dialogue panel refuses to appear. When we demoed it for the class, we found a hole in the colliders that we had never encountered before. However, I think the experience can go one of two ways. The user can skip through the dialogue as fast as possible, hit the big shiny orbs, and then fumble for the escape key or Alt-F4 to quit. Or, the user can slow down, pay attention to the characters, read the dialogue, and come to understand an interesting story about what is reflected in the mirrors of the people we encounter every day, and what we all pretend to forget. Given the timezone differences within our group and our relatively amateur skill sets, I think we did fairly well. So, without further ado, here’s a run-through.

Project 3 Developmental Journal | The Fall of Octavia

For the final project, Ellen, Steven, and I decided to create an impression of the city of Octavia (from Invisible Cities by Italo Calvino) as a backdrop for a mother’s journey to find her daughter in the midst of the city’s fall.

The city of Octavia struck us as precarious, its foundation held by spider-web structures of ropes and ladders spanning a deep valley between two tall mountain peaks. The fate of the city was believed to be certain: one day, the ropes would snap and the city would fall, bringing with it the people of Octavia.

Illustrations of the city of Octavia. These serve as a nice departure point for us to construct the city in a 3D environment.

Initially, we envisioned the city being destroyed by a monster spider that laid the spider-web foundation of the city in the first place (on which the Octavians built their houses). The spider came back to its web only to find that it had been stolen by humans; thus, it resolved to take vengeance and destroy the city. The main character then has to save the city.

However, after sharing the idea with everyone and receiving mixed feedback on the main character’s incentive (it was not clearly defined whether their motive was to save the city or to save the people), we decided to ditch the spider idea altogether.

Initial rough sketch

Instead, the story now involves a mother and her journey to find her daughter in a chaotic, gradually disintegrating city and get to the mountain before the entire city collapses.

A big part of the experience will revolve around environmental storytelling. On her way to find her daughter, the mother will witness parts of the city (ladders, debris…) slowly falling into the abyss while people frantically try to escape. Some people cry. Some people yell for help. We hope to capture the essence of the city and its people in its demise.

We have yet to decide whether the mother’s journey should be simple (along a predetermined linear path, so events can be triggered linearly) or more complex (allowing more degrees of freedom). We have to take into account the limiting factors of the Google Cardboard when devising the storyline of the experience. We should also think about whether to keep a time limit, which would dictate two outcomes: the mother finds her daughter just in time and makes it out alive, or she fails and falls with the city. Without one, the user can take as much time as they want to accomplish the task, which begs the question of what keeps the user incentivized to proceed through the experience with the kind of emotion and pressure that a mother in that situation would feel.

[Updated] April 22, 2020

After many discussions and much feedback, we decided that the mother’s/father’s journey to find their daughter in the falling city of Octavia would be quite linear.

The 360 revised sketch of the scene. There is one main long stretch of pathway across the city, connecting the two mountains. From there, houses on smaller platforms branch out. The main pathway will be the only navigable space within the scene, which considerably simplifies the experience.

[Updated] April 29, 2020

I took on the role of creating the Non-Playable Characters (NPCs) that would help further depict the demise of the city by incorporating human elements into the environmental storytelling of the experience.

As we picked the Medieval Unity package for the houses and animated objects, it was clear that the characters should share the same aesthetic.

The scene’s Medieval aesthetics

Most of the Medieval characters we found were either too expensive or too few to work with; therefore, I tried to create them from scratch.

Even though I have experience with Adobe Fuse and Mixamo for creating characters with compelling animations, I stumbled upon many challenges:

  • Adobe Fuse’s aesthetic is too modern for our project. I could only find one suitable costume online. Even then, this costume proved difficult to work with down the line.
  • First, the original costume file (.obj) froze up Adobe Fuse because it contained too many faces (more than 20,000). I had to reduce the file in Blender to roughly a third of its original size so that it could be imported into Fuse:
  • Even then, when importing the costume into Fuse, something weird happened:
The costume didn’t cover the character’s body completely, no matter how hard I tried to reconfigure the body’s size. It seemed that there was something wrong with either the original file or the resizing process.
Not one to give up, I tried remodeling the costume file in Blender…
…which proved to work. However, I had to go back and forth between Blender and Fuse multiple times because there was no way to preview how the costume fit the character’s body.
After many iterations, I finally got something that looked acceptable!
This will be the daughter character. Below are some other women characters:

For now, I think I am done with the women characters. The next step is to find suitable Medieval male clothes and create some men characters. After that, I will be able to plug the characters into Mixamo to animate them and bring them into Unity.

[Updated] May 3, 2020

After modeling the characters in Fuse and animating them in Mixamo, I imported the characters into Unity. The import process, however, was far from straightforward. I had to tweak the materials’ rendering modes to pull out the most accurate depiction of each character.
Here is a before-and-after example for the eyelashes.
An overview of all the NPCs (of course, they would be scattered across the scene later on)
An overview of the NPCs’ animations. Most NPCs have two different animations: a default animation that plays from the scene’s beginning and a secondary animation that is triggered if the player comes close to them (within a radius of, say, 20). The idea of such “interaction” is to free the player from having to concentrate on gazing at or clicking on the NPCs; rather, the environmental storytelling told by the NPCs naturally unfolds as the player walks across and explores the city.
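The trigger behind that secondary animation can be as simple as a distance check, along these lines (a sketch; the “Triggered” parameter name is illustrative):

```csharp
using UnityEngine;

// Sketch: each NPC plays its default animation until the player comes
// within `radius`, then fires the trigger for its secondary animation once.
public class NpcProximityAnimation : MonoBehaviour
{
    public Transform player;
    public float radius = 20f;    // trigger distance, in scene units
    Animator animator;
    bool fired;

    void Start()
    {
        animator = GetComponent<Animator>();
    }

    void Update()
    {
        if (fired) return;
        if (Vector3.Distance(player.position, transform.position) < radius)
        {
            animator.SetTrigger("Triggered");   // switch to the secondary animation
            fired = true;
        }
    }
}
```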

[Updated] May 8, 2020

This is the place where the daughter will be hiding. I put some fences around so as to guide the player to approach the daughter at the right angle. I will add sound cues to suggest that the daughter is behind the fence when the player approaches this point in the game (the sound of the daughter crying/shouting for help). The daughter starts waving at the player as soon as the player comes to the other side of the fence, i.e. closer to the daughter. She will also shout “Mom… mom… come closer” to encourage the user to come even closer while she waves. As soon as that interaction is achieved, the daughter will “stick” to and move along with the player for the rest of the game. I also added a script that makes the daughter face towards the player when she waves.
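The face-the-player and follow behaviours together look something like this sketch (illustrative names; not our exact script):

```csharp
using UnityEngine;

// Sketch: the daughter always turns to face the player, and once
// "rescued" she trails behind the player for the rest of the game.
public class DaughterFollow : MonoBehaviour
{
    public Transform player;
    public float followDistance = 1.5f;
    public float speed = 2f;
    public bool rescued;    // set true when the player gets close enough

    void Update()
    {
        Vector3 toPlayer = player.position - transform.position;
        toPlayer.y = 0f;    // keep her upright while turning
        if (toPlayer.sqrMagnitude > 0.001f)
            transform.rotation = Quaternion.LookRotation(toPlayer);

        // After the rescue, move along with the player.
        if (rescued && toPlayer.magnitude > followDistance)
            transform.position += toPlayer.normalized * speed * Time.deltaTime;
    }
}
```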

[Updated] May 11, 2020

We received some feedback during the play-test session regarding the introduction to the scene, i.e. setting up and giving the audience some background on the city and what is going on. We decided to make use of a flyby camera to show the entire city from above while narrating it as a way to open the scene. While implementing another camera in the scene was easily achievable, we encountered some problems integrating it with the existing scripts that use the first-person character controller and camera; these have all been resolved.

“Octavia, the spider-web city, is nested between two steep slopes, suspended by ropes, chains and bridges. Suspended over the abyss, the life of Octavia’s inhabitants is less uncertain than in other cities. They know the net will last only so long. The fateful day has come, the bridges are breaking and the ropes snap, leaving the inhabitants to face their city’s terrible doom. Find your daughter, the one thing you value in the city, and flee to the slope on the opposite side to safety…”
I used a black image on top of the canvas to implement the fading effect when changing scenes, adjusting its opacity over time.
This is the code for the camera flyby. Originally, we kept track of the z position of the camera to determine when to terminate the flyby and change to the first-person controller. However, this proved unreliable: sometimes the camera would reach the predetermined z position too soon and terminate before the narration stopped (because of the inconsistent Update calls). Thus, I did some research and used Time.deltaTime to keep track of real time in seconds, syncing the narration with the speed at which the flyby camera moves.
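The timing logic amounts to something like this (a sketch of the approach, not our exact code; names and durations are illustrative):

```csharp
using UnityEngine;

// Sketch: accumulate real seconds with Time.deltaTime and hand over to
// the first-person controller once the narration length has elapsed,
// instead of testing the camera's z position.
public class CameraFlyby : MonoBehaviour
{
    public float flybyDuration = 30f;          // narration length, in seconds
    public float speed = 5f;
    public GameObject firstPersonController;   // enabled when the flyby ends
    float elapsed;

    void Update()
    {
        elapsed += Time.deltaTime;
        if (elapsed < flybyDuration)
        {
            transform.Translate(Vector3.forward * speed * Time.deltaTime);
        }
        else
        {
            firstPersonController.SetActive(true);   // switch to first person
            gameObject.SetActive(false);             // retire the flyby camera
        }
    }
}
```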

Development Journal – Final Project

12 April – Brainstorming + Story Board

For this project, Ben, Chris, and I decided to go with the theme of an escape room. We came up with a few ideas while trying to narrow down what specific scenario we wanted to create, and finally we chose to make the protagonist sit in a wheelchair while moving around and seeking clues. We will let the user sit on a chair to simulate the wheelchair experience, which also fits how Google Cardboard works. The wheelchair not only limits movement in the game; it may also be a core part of our story. We are going to set the story in a theater/hospital/laboratory/retirement home, because a wheelchair can be connected to these locations and can relate to the protagonist’s experience or identity, or give him a reason to take some action.

Storyboard by Ben

So far, we have spent most of our time deciding on the general direction and the mechanics of how the wheelchair will move. We also started to compose our story and design the escape process to make the whole thing cohesive and intriguing.

Mood Board by me
Another Storyboard by me

20 April – Paper Prototype

We discussed some story details and made this paper prototype for the first testing. We only put the key objects on this simple hospital map to give the general idea. The character can move around in a wheelchair to explore this space. During the paper prototype testing, we received some feedback about the navigation and the style. After the session we also reconsidered how to construct our story in a better way.

26 April – Scene Layout

We’ve figured out how to control the character. Basically, the character will move forward with a long click and interact with objects with a short click.
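Distinguishing the two comes down to timing how long the button is held; here is a sketch of the idea (the 0.3-second threshold and names are illustrative):

```csharp
using UnityEngine;

// Sketch: time how long the Cardboard button (mouse button 0 in the
// editor) is held. A long press rolls the wheelchair forward; a quick
// release counts as a short click and fires an interaction.
public class WheelchairInput : MonoBehaviour
{
    public float longPressThreshold = 0.3f;
    public float moveSpeed = 1.5f;
    float pressTime;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
            pressTime = Time.time;

        // Held long enough: move while the button stays down.
        if (Input.GetMouseButton(0) && Time.time - pressTime > longPressThreshold)
            transform.Translate(Vector3.forward * moveSpeed * Time.deltaTime);

        // Released quickly: treat it as a short click.
        if (Input.GetMouseButtonUp(0) && Time.time - pressTime <= longPressThreshold)
            TryInteract();
    }

    void TryInteract()
    {
        // Raycast from the camera and notify whatever is gazed at.
        Transform cam = Camera.main.transform;
        if (Physics.Raycast(cam.position, cam.forward, out RaycastHit hit, 3f))
            hit.collider.SendMessage("OnShortClick", SendMessageOptions.DontRequireReceiver);
    }
}
```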

For the scene, I started by building a basic hospital structure with a few sections. At first, our mood board leaned toward a horror style, but we later found we all preferred psychological horror and tried to go for a creepy, clean style. As we searched for assets, we didn’t find any that were very suitable, so we chose a zombie hospital asset since it has a complete set of hospital props. I didn’t want to replicate the exact style of its demo scene, though, so I looked for a way to set the lighting so that it reads as psychological horror. We all agreed on this change, and these are what we have right now.

Layout

4 May – Play Tests

This week we did two play tests. In the first test we had two separate builds: one for the movement and the other for the scene. In the second test we were able to combine the two parts and mostly tested the scope of the space, the movement speed, and the general scene settings.

Here are some points I gathered through the testing:

  • To provide the motivation to escape;
  • To limit the angle to look down;
  • To add more stuff / interactions;
  • To add some audio to wheelchair;
  • To add some glowing effect;
  • To implement object pickup animation;

There were quite a few useful points, and we were inspired by some of them. By the end of the project, only the last point remained unimplemented due to time constraints.

6 May – Ending Scene

To make our story complete, we also decided to add an ending scene after the character manages to escape. Instead of the first-person perspective, this ending scene is a third-person view through a monitor screen.

When building this scene, the monitor screen is in fact a green filter on a UI canvas, and the red circle is made from two cylinders. One more detail is in the player’s animation: he idles back and forth for a few frames before leaving, so the transition from the previous scene feels more natural. Later, this scene also gained the audio of a computer talking, which helps illustrate the story idea.

Screenshot of the Ending Scene

9 May – Sound Effects + Story Reconstruction + Photos Editing

To create a more immersive experience and give the character motivation to escape, we thought adding audio would be a good way. Besides the basic wheelchair sound effect, there are sounds played only once at the beginning, sounds confined to a certain area, and sounds that are triggered if the user enters a certain space. The combination of unknown footsteps and baby cries near the mortuary is meant to create tension and indicate that something undesired may happen. There’s also one moment when the sound of moving beds is panned left to make the user feel something is on the left. But when they step out of the trigger area, the sound cuts off, as if what they heard was only an illusion.

The tricky settings in this part are that the Spatial Blend should be set to 1 and the Doppler factor to 0 to achieve the 3D sound effect. Also, for the sound of the wheelchair moving, it’s not natural to control it with Play/Stop. Instead, I found that adjusting only the volume is a better solution (see the sketch after the demo below).

Sound Effects Demo
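The volume-only control amounts to keeping the loop playing and fading it in and out; a sketch of the idea (names are illustrative):

```csharp
using System.Collections;
using UnityEngine;

// Sketch: the looping wheelchair clip plays continuously (Spatial Blend = 1,
// Doppler Level = 0 on the AudioSource); only the volume is raised while
// moving, which avoids the pops of calling Play()/Stop().
[RequireComponent(typeof(AudioSource))]
public class WheelchairAudio : MonoBehaviour
{
    public float fadeSpeed = 5f;
    AudioSource source;

    void Start()
    {
        source = GetComponent<AudioSource>();
        source.loop = true;
        source.volume = 0f;
        source.Play();   // keep it running; we only modulate the volume
    }

    public void SetMoving(bool moving)   // called by the movement script
    {
        StopAllCoroutines();
        StartCoroutine(FadeTo(moving ? 1f : 0f));
    }

    IEnumerator FadeTo(float target)
    {
        while (!Mathf.Approximately(source.volume, target))
        {
            source.volume = Mathf.MoveTowards(source.volume, target, fadeSpeed * Time.deltaTime);
            yield return null;
        }
    }
}
```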

For our background story, we originally set it in a world that sells happiness, which is why the clue photos are all laughs. However, as we kept polishing the narrative details, we decided that an AI/machine-dominated world would make more sense and be more consistent, so we started to guide our narration in that direction.

To match the style of the body model, I also updated the clue photos as follows:

Low-poly body model
Edited Photos
Original Photos

12 May – Interaction + Scenes Transition + Keypad GUI

As we had each worked separately on portions of the project, we finally combined all of them, including the wheelman animation, the door animation, the keypad system, the scene transitions, and the photo collection interface.

WheelMan Animation

Originally we had some text and a flickering cursor on the canvas in the ending scene, but it was not elegant enough, so we used a dissolve effect for the transition between the two scenes.

Original Try with Text Animation

For the keypad system, we also fixed every problem we met, like fixing the trigger state and placing it right in the center. There’s also a “?” at the bottom right of the keypad; clicking it shows a “five-digit password” hint on screen. At first, the user needed to click again after the door opened to enter the ending scene, but a second click is not intuitive, so we used an “Invoke” call to add a delay after the door animation so that the scene transitions more naturally (see the sketch below).

Door Animation with transition to Ending Scene
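The Invoke-based delay is roughly this (a sketch; names and the delay value are illustrative):

```csharp
using UnityEngine;
using UnityEngine.SceneManagement;

// Sketch: once the correct code is entered, play the door animation and
// schedule the scene load with Invoke() so it fires after the animation
// instead of requiring a second click.
public class DoorOpener : MonoBehaviour
{
    public Animator doorAnimator;
    public string endingSceneName = "EndingScene";
    public float doorAnimationLength = 2.5f;   // seconds; match the clip

    public void OnCorrectCodeEntered()         // called by the keypad script
    {
        doorAnimator.SetTrigger("Open");
        Invoke(nameof(LoadEnding), doorAnimationLength);
    }

    void LoadEnding()
    {
        SceneManager.LoadScene(endingSceneName);
    }
}
```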

For photo collection, when the user clicks the glowing album at the very beginning, four red rectangles appear at the bottom left to indicate that there are four photos in total. By clicking different objects, the user can collect the photos one by one and get the clues.

Photo Collection Interface

We also met a weird camera-shaking issue whose cause we are still not sure of, but we later solved it by simply locking the rotation axes.

Project 3: Development Journal

For this final project, Neyva and I were inspired by the prompts of wonderland and escape room. For us, escape room loosely represented the existence of a motivation or objective for the player that would result in some sort of relief. Wonderland then served as inspiration for our setting, which led us to consider fantasy or supernatural elements for it. We eventually started discussing the possibility of drawing inspiration for our experience from folklore – more specifically, Japanese Yōkai folklore, which deals with supernatural monsters, spirits, and demons. After researching different Yōkai, we came across the kitsune, or fox spirits with various abilities, such as being able to shape-shift into humans. According to Japanese folklore, kitsune can have up to 9 tails, with a higher number of tails representing the fox’s age, power, and intelligence.

There are also various types of kitsune. The two that are key figures in our game are the following:

  • Nogitsune: Nogitsune are wild foxes that do not serve as messengers for the gods. They are known to torment, trick, and even possess humans, sometimes using their shape-shifting powers to do so.
  • Zenko: Also known as benevolent foxes, the zenko are mostly associated with the deity of rice, Inari. These kitsune are white in color and are also known to be able to ward off evil, also at times serving as guardian spirits. They also help protect humans against nogitsune. 
A zenko kitsune with 9 tails
Wild kitsune, nogitsune

Given that representations of kitsune are usually found in shinto shrines in the form of statues, we decided to situate our game in a shinto shrine as well.

The Fushimi Inari shrine in Kyoto has many statues of Inari’s kitsune scattered throughout (please disregard the watermark)

In terms of our story, we decided that we would like it to be based on the zenko and the nogitsune foxes. This is how the story/experience would pan out:

  • User finds themselves in the middle of a shrine/cemetery during sunset
  • As the sun sets, the environment starts looking more hostile/surreal (haze, colored skybox, creepy background sound)
  • Once the environment is fully “surreal”, two foxes appear in front of the user. Both have 9 tails and look similar. (one is an Inari fox, the other is a wild fox that has disguised its appearance)
  • The user is prompted to “make a choice” and pick one of the two foxes.
  • If the user chooses the Inari fox, the environment goes back to how it normally was (we are still considering different options on how to make this outcome more interesting/exciting)
  • If the user chooses the wild (bad) fox (which is disguised as a good kitsune), they stay trapped in the surreal space.

After pitching our project to the class, we received very helpful feedback from everyone. This is a summary of what we still need to consider as we work on the story/game:

  • Ending: does it end due to a user’s choice? Or just naturally? Or does the user just take the Google Cardboard off?
  • How do we hint at the choice that the user has to make? → we could possibly have the kitsune be on different paths and then the user chooses between them → does this mean that they move somewhere else after following the path? The user appears in another part of the shrine?
  • How do we create a satisfying ending for the good fox? (right now the “bad ending” seems more interesting)

04/29 Update

First, here’s our storyboard for our paper prototyping session. As can be seen, the user starts in the middle of a path. At each side of the path, the kitsune will appear.

Since our paper prototyping sessions, Neyva and I have been bouncing a lot of ideas back and forth as we continued to decide what would happen with our story. Following Sarah’s advice of definitively establishing what would happen in the story before focusing on the environment building, we considered a lot of options before finally deciding on a sequence that we think is technically possible and which also maintains the integrity of our original story. A first new idea was inspired by a scene in the movie Ghost in the Shell: Innocence, where the protagonists are trapped inside an illusion that has them repeat the same experience/time 3 times until they realize they are trapped, successfully breaking the curse. It’s a really interesting sequence, which can be seen here from minute 56 – 1:08 (shorter version from 1:02 – 1:08).

For our project, we were similarly thinking that, instead of just having to make one choice between 2 foxes that either saves or dooms you, you start the experience by getting cursed by the bad kitsune. The curse is having the illusion of choice, of being able to escape by choosing one of the foxes. In reality, with each choice, the same experience repeats itself: the user finds themselves again in the same shrine, presented again with what seems like the same choice. Trapped, the only way the user can break the curse is to identify what is off in the environment (what has changed) and click on it instead of on the foxes. As we were fleshing out this idea, however, we questioned how hard it would be for users to catch on to the fact that they were stuck in this cycle, regardless of which fox they chose. We were concerned that users would instead be confused and even bored if they thought that all there was to the experience was a cycle of choosing between foxes that seemingly didn’t make a difference. In light of this, we started thinking about the possibility of telling the user to look closely at the environment, implying that their attention to detail will ultimately affect their experience. Following this line of thought, we settled on how our experience will work:

  1. User appears in a shrine/cemetery at sunset.
  2. A text overlay states: “Look closely around you. Click when you’re ready.” The user now has the option to look around and pay attention to their surroundings, and to decide when they are ready to continue.
  3. Once the user clicks, the atmosphere turns eerie (the skybox turns dark, the lanterns become weird colors). 2 kitsune walk towards the user and sit at a distance from them. A new text overlay states: “Select the 3 changes”. An overlay on top of each fox contains a riddle/list of objects that it suggests the player pick. The good fox’s list contains the correct choices. The bad fox’s list contains one wrong item. By having this overlay on top of the foxes, the user can at least have a hint of what they can select (or which fox’s advice they’d like to follow), if they are unable to track the changes.
  4. Using their Raycast pointer, the user must now identify the 3 items/things that changed in the surroundings (this does not include the atmospheric change). Once they click on an object, it will turn a highlight color to indicate that it has been selected (see the sketch after this list).
  5. Once the 3 choices are made, the following could happen depending on whether the items are correctly selected or not:
  6. If they are properly selected: the bad fox walks away and the environment goes back to normal. Overlay states: “Good job! You made it.”
  7. If they are not properly selected: the bad fox walks towards you. Overlay states: “Wrong choice”. Everything goes black.
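The selection highlight can be handled per object, along these lines (a sketch; the click hookup and names are illustrative):

```csharp
using UnityEngine;

// Sketch: when the player clicks a selectable object, tint its material
// toward a highlight colour; clicking again deselects it. A manager
// script would then count how many correct objects are selected.
public class SelectableObject : MonoBehaviour
{
    public Color highlight = Color.yellow;
    public bool selected;
    Color original;
    Renderer rend;

    void Start()
    {
        rend = GetComponent<Renderer>();
        original = rend.material.color;
    }

    public void OnClicked()   // wire this to the gaze/click event
    {
        selected = !selected;
        rend.material.color = selected ? highlight : original;
    }
}
```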

And an update on how the environment is starting to look:


05/04 Update

Our playtesting session today was really helpful in giving us a better sense of how to hone down our interactions. These are additional notes we took during our playtesting session today:

  1. Give better indication at the beginning of paying attention to details. Mention some change.
  2. Possibly go back? Possibly do 3 rounds or something like that? –> perhaps this is not necessary if the text at the beginning is obvious
  3. Right now, the second change looks like nighttime; change it so it looks more surreal
  4. Sunset: take out shadows
  5. Have the text in front of you as soon as you go in. Experiment with overlay vs with set position

05/07 Update

After the second playtesting sessions, here are some additional notes that Neyva and I are considering to improve our project. Update, 5/13: after implementing the changes, I’m adding more descriptions on what we ended up doing.

  • Text resolution/canvas overlay: must be responsive to fit large resolution screens
  • Text overlay: in order to prevent people from clicking instantly and skipping the first part of the experience, we decided to implement a script that disables mouse clicking for the first 10 seconds. After these 10 seconds, a text is shown prompting people to “click when you’re ready”. Furthermore, after clicking once, users are prompted, “are you sure?” so they reconsider this choice.
  • Scene change: we still need to make the new environment seem more surreal/ominous. This can be done by changing the skybox to have more unnatural colors and perhaps adding fog or another particle system. This is how the lighting looked at first, when we wanted to have the user start at sunset:
This scene already looked a bit ominous with the pink ambience and the skybox

After realizing people would confuse the change of scene with just nighttime due to the fact that they were previously in a sunset setting, we decided to change it to being during the day. This would make the change of scene more prominent.

Changing the skybox to a sky blue and changing the rotation of the sunlight was key in giving the feel that the setting was during the day.

Layout: to keep people from thinking that they can move to other parts of the road throughout the experience, we have decided to change the layout of the shrine/cemetery. Instead of placing the user in what seems to be the middle of a road, the user will now be placed in the middle of a circular layout with only one opening (which is where the foxes will come in from). By having everything directly surrounding them, the user will be able to pay more attention to the details around them. This is how the environment originally looked:

Users would find themselves in the middle of this path, which unfortunately gave the sense that they could potentially move throughout the space
Having so many items laid out in this vast space was also very overwhelming for users, as they weren’t sure where their attention should be

Objects: following the previously mentioned layout, we decided to place more “flashy” and distinguishable objects in front of the user to emphasize how these are the ones that will potentially change, not the ones in the background.

Having items that were noticeably background or foreground was key in directing users’ attention
Having such a big lamp such as this one enabled it to stand out from the other simpler objects
  • Movement of foxes: how does their movement start? Do they just appear? Maybe every few seconds they switch between sitting and standing idle (to make them more realistic). In the end, we decided that both foxes appear running towards the user. Once they stop, the new instructions appear, suggesting that these are related to the foxes.
  • The pointer: originally, we wanted the pointer to change when it hovered on a selectable object (we decided not to implement this in the end as we realized that the changing color of the hovered object material is enough indication for users to know they can select it)
  • The riddles: the riddles for us were key in giving more depth to the experience, as well as involving the foxes more into our narrative, as we had originally envisioned. In a way, even though users are not necessarily selecting foxes anymore as we had thought at the beginning, they can choose which fox to trust. Regarding the content and style of the riddles, we aimed at making them seem cryptic yet understandable after a few read-throughs, and we hope that players are able to take the time to try to decipher them.

Explanation of riddles:

Right (correct answer)

  • “In our likeness we stray from the path, one good one bad”: referring to the identical fox statues changing their facing direction
  • “Look for the red, that emerged from the stone. Both small and large, they will return you home”: referring to the small tori gate that turned from being stone gray to red, as well as the surrounding fence that completely changed from being stone to being red and made of wood

Left (incorrect answer)

  • “One light guides the path to where you came. It burns not”: referring to the candle (wrong choice)
  • “As the stone grows cold, a red outer edge is your first guide”: referring to the fence that became wooden and red
  • “Only one of us will save you, although both of us are key”: referring to the fox statues

Development Journal | Final Project

PROJECT MEMBER: Luize, Nhi, and Tiger

PROJECT THEME: Apocalypse/Escape Room

IDEA DEVELOPMENT:

In our first meeting (via Zoom), we decided on a few elements that we wanted to explicitly convey in our project before brainstorming: 4 final project theme ideas (Apocalypse, Escape Room, Wonderland, Create an interpretation of a city from Invisible Cities), a fictional space, interactions, events, and the sense of storyness.

In the beginning, we thought of recreating 3 cities from Invisible Cities: FEDORA, OLIVIA, ESMERALDA. The theme would be escape room & Invisible Cities interpretation, linking to the current situation where everyone is trapped in their own space and tries to escape that state of mind by connecting with other people through the Internet – a way of escaping the reality we are living in right now. Each city has its own unique inhabitants; for example, OLIVIA has skeletons, since it reflects industrialization and the repetitiveness of the work people do every day.


However, since the main focus of our project is the sense of storyness, we found that our approach of recreating the invisible cities did not reflect what we wanted. We brainstormed a different idea on the escape room theme. The context would be: the protagonist (the user) is a prisoner who wakes up with amnesia and finds themselves in a small, dark cell. There is a giant clock on the wall of the cell showing red digits and counting down from 1 hour. This, hopefully, triggers anxiety and makes the user look for tools and attempt a jailbreak. When the user successfully finds the door to escape, different scenes await them (representations of the past, present, and future of a person). In the final scene, the user can find the door that brings them back to reality. The whole message we want to present through this idea is that every moment in life is precious.
This is a better idea compared to the first, but we encountered one problem. Since each user has their own experiences, there would be no generic way to lay out the scenes that could evoke the feelings/emotions for the user to reflect on. Therefore, we decided to revise the idea into a more neutral setting: the undersea environment.


Final idea:

  • Beginning scene: neutral, white background – a television (off) -> the user must interact with (turn on/off) the TV to enter the undersea alternate-reality world.
  • The same idea as before but with different scenes and a different message: travel through time in the ocean to see how the environment changes during each time period.
  • THEME: Apocalypse
  • The user would be under the sea; a line (road) indicates where the user should move (sunken ship (1920), submarine (2020)), where they would find a button to enter the same scene 100 years later.
  • The scenes (3 scenes), in each time period, you still have a clock/sign somewhere to indicate the time (years, explicitly)
  • 1st scene (past, 1920): no plastic
  • 2nd scene (present, 2020): a lot of plastic, but the ocean can still be saved
  • 3rd scene (future, 2120): a lot of plastic and no animals -> can’t be saved anymore. In this 3rd scene, the user will need to dig into the plastic in order to find the button and travel back to the current time (reality).
  • You go back to the present (the beginning scene) and take action to do something to save the environment.
  • Message: save the world before it’s too late.


Some clarifications/class feedback/adjustments:

1. The meaning of the TV in the first and last scene: The user needs to interact with the TV in order to move to the undersea scene. What we had in mind was that we could try showing different scenes of the ocean before the user actually experiences it. It’s similar to the fact that most people would just know about the undersea world through the screen but not by actually experiencing it.

2. Reduce the number of scenes to 4-5: Though that is a lot of scenes, it basically comes down to 3 ideas: first, the TV scene, which is extremely simple; second, the outside undersea scene (appearing three times with different levels of destruction); and third, the inside scene (sunken ship and submarine). In short, we only lay out 3 main scenes and then replace a few things in each to demonstrate what we want.

3. The use of the button (click): not to have a literal button that triggers changes but something more subtle that blends in with the story – the nautilus

UPDATE April 17, 2020

Tiger and I finished the list of needed assets and created the first scene in our project. In this scene, we added a screen to the TV, which will be used to display the video later.

First room the user enters

UPDATE April 20, 2020

After the lecture and class discussion on Procedural Authorship, our team felt that, in our project, the players would be in the role of a “Ghost without impact”, since they would only observe what happened throughout 300 years without having any real impact on the environment. Hence, we decided to create some interactions between the user and the environment and limit our scenes to only 2 main scenes:

  • The first scene: the users will enter an apartment (which is also their house in the game) where they see snacks, water bottles, cups, and cans on the table and on the floor. They get a chance to interact with the objects by grabbing and releasing them. They can also move by clicking the mouse (clicking the button on the Google Cardboard). The main point of the scene is when they turn on the TV and watch a video/teaser of the experience they will have next.
  • The second scene: the users enter the underwater scene of 100 years ago. As they explore and interact with the undersea animals, they will leave a trace of plastic behind them (the cans, water bottles, or cups that they saw in the first scene). We also hope to make the scene gradually polluted (sea animals/plants gradually die), which also represents the 3 initial scenes we had in mind (1920, 2020, 2120).

UPDATE April 26, 2020

We finished laying out two basic scenes.

In the first scene, I added objects for user interaction such as cans, chips, water bottles, and coffee cups, and wrote scripts for PlayerGrab and PlayerWalk. I also wrote SceneCtrl for switching scenes later. Finally, I added event triggers to the objects so that when the users gaze at them, they can click the mouse/button on the Google Cardboard to grab or release them.
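A gaze-based grab can be kept very simple; here is a sketch of the idea behind PlayerGrab (a reconstruction with illustrative names, attached to the camera):

```csharp
using UnityEngine;

// Sketch: clicking while gazing at a "Grabbable" object parents it to a
// hold point under the camera so it moves with the view; clicking again
// releases it where it is.
public class PlayerGrab : MonoBehaviour
{
    public Transform holdPoint;   // empty child of the camera
    Transform held;

    void Update()
    {
        if (!Input.GetMouseButtonDown(0)) return;

        if (held != null)
        {
            held.SetParent(null);   // release
            held = null;
        }
        else if (Physics.Raycast(transform.position, transform.forward,
                                 out RaycastHit hit, 2f)
                 && hit.collider.CompareTag("Grabbable"))
        {
            held = hit.transform;
            held.SetParent(holdPoint, worldPositionStays: true);   // carry it
        }
    }
}
```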

UPDATE April 29, 2020

After the team check-in, we all agreed on the current design of the environments (of the room and of the underwater scenes) and finalized the interactions we are going to add in the underwater scene. Currently, the user is able to look around using ctrl + moving the mouse in the direction they want to see. They also are able to walk by clicking the mouse (the button clicking in Google Cardboard) in the two scenes.

  • The final interaction we are going to add in the white room is the user’s interaction with the TV. When the user looks at the TV, it is expected to change color from black to white. When the user clicks on the TV screen, it is going to show the below video, which is designed by our team member Luize.
  • In the underwater scene, every time the user walks around, they will leave a trace of plastic behind them. They can also interact with the sea creatures and animals, and there would not be any immediate effects. However, the scene would change gradually: the environment becomes darker; the fish disappears gradually, etc. The user might not notice this but over a period of time, the change would be significant enough for them to realize their negative impact on the ocean.

UPDATE May 05, 2020

After the first playtesting, we realized that the first scene was not well designed and thus prevented the user from interacting with the objects in the room. Since we wanted to create a setting that truly reflects the daily life in an apartment, we decided to recreate the scene. I was in charge of redesigning the scene and adding interactions in this scene, while Tiger and Luize focused on redesigning the second scene.

In this first scene, I added corals and sharks to hint at the underwater scene to come. When the user interacts with the objects (chips, coffee cup, milk bottle), the objects are constrained to a vertical line, moving only up and down. I limited the movement because I could not figure out a way to make it look natural when the user drops an object. Also, the user can click on the TV screen and the TV will show the video. After the video finishes, the coral on the TV shelf lights up, inviting the user to interact with it. When they click on the coral, it leads them to the second scene.

In this underwater scene, after Tiger and Luize finished the design, I added the player walk movement in this scene to make it consistent with the movement in the previous scene.

UPDATE May 08, 2020

After the second playtesting, we realized that the constraint on the movement of the objects made the interaction meaningless. Professor Sarah Krom has been really supportive and helped us out with this problem (by adding a Rigidbody to the food objects so we can take advantage of physics when dropping them). I am currently finishing the final touches on the interactions of these objects. I also added a script to hide the cursor whenever the user enters the scene.

UPDATE May 11, 2020

The scripts for the food objects worked perfectly thanks to the help of Professor Sarah Krom. The user is able to grab the food objects and drop them anywhere they want. However, there was one problem I encountered while working on this part. When we grab a food object, its isKinematic flag is set to true, which disables collision detection. The fix for this problem is Edit -> Project Settings -> Physics -> Contact Pairs Mode set to Enable Kinematic Static Pairs. This makes sure that collisions are still detected while the object is held in hand, so the object is released whenever it collides with other game objects in the room.
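The grab/release handoff to physics looks roughly like this (a sketch; the method names are illustrative and would be called from the grab script):

```csharp
using UnityEngine;

// Sketch: while held, the object is kinematic (carried by the hold point);
// on release, physics takes over so it falls and collides naturally.
// Contact Pairs Mode = Enable Kinematic Static Pairs keeps collisions
// reported even while the Rigidbody is kinematic.
[RequireComponent(typeof(Rigidbody))]
public class GrabbableFood : MonoBehaviour
{
    Rigidbody body;

    void Start()
    {
        body = GetComponent<Rigidbody>();
    }

    public void OnGrabbed(Transform holdPoint)
    {
        body.isKinematic = true;           // stop physics while carried
        transform.SetParent(holdPoint);
    }

    public void OnReleased()
    {
        transform.SetParent(null);
        body.isKinematic = false;          // let gravity drop it naturally
    }
}
```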


UPDATE May 12, 2020

While Tiger and I worked on the final touches for the project, mostly in the second scene, Luize prepared the presentation. I replaced the FPS controller with a player object that can only move by clicking the mouse. Since movement underwater is different from movement on the ground, we decided to keep the user near the seabed, as if they were swimming along the path.

We also adjusted the frameCount checks in the scripts to change the speed and number of plastics, the change of light, and the disappearance of the fish in the ocean. We also adjusted the scene-switching script to enable the user to go back to the previous room, their daily life.
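The gradual decay driven by frame counts can be sketched like this (illustrative; not our exact script or values):

```csharp
using UnityEngine;

// Sketch: every `frameInterval` frames, drop a piece of plastic in the
// player's wake, dim the scene light a little, and hide one fish, so the
// pollution builds up gradually as the player swims.
public class OceanDecay : MonoBehaviour
{
    public GameObject plasticPrefab;
    public Light sceneLight;
    public GameObject[] fish;
    public int frameInterval = 300;   // tune to pace the decay
    int hidden;

    void Update()
    {
        if (Time.frameCount % frameInterval != 0) return;

        // Leave plastic behind the player.
        Instantiate(plasticPrefab, transform.position - transform.forward, Random.rotation);

        // Darken the scene and thin out the fish.
        sceneLight.intensity = Mathf.Max(0.2f, sceneLight.intensity - 0.02f);
        if (hidden < fish.Length)
            fish[hidden++].SetActive(false);
    }
}
```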

We also discussed whether we should change anything in the first room when the user goes back. After discussion, we all agreed to keep it the same: without any change in behaviour, we cannot expect a person’s daily life to change that easily. The unchanged room represents an infinite loop that can only be broken by a change in awareness and behaviour. And though it is easy to realize how much plastic a single person can generate, it is challenging to give up the convenience of plastic in our daily lives even when we realize its negative impact on the environment.

Development Journal: The Fall of Octavia

For this project, Vinh, Ellen, and I aim to create a narrative experience based on the destruction of Octavia, one of the invisible cities. We want to depict Octavia, and more specifically its destruction, because it is described as a city in a precarious situation: it is suspended in the air between two mountains by ropes and chains.

Calvino writes: “Suspended over the abyss, the life of Octavia’s inhabitants is less uncertain than in other cities. They know the net will last only so long.”

Depiction of Octavia (image by Manisha Dusila)

We narrowed the experience down to one inhabitant’s quest to escape the city along one of the ropes that hold it together, to the mountain. But first, the inhabitant must find his daughter in the city. This lets the user experience our interpretation of the city and the realities of inhabitants facing their city’s doom, motivated by the search for a loved one lost in the city’s streets.

We want the story to illustrate how the city’s dangerous location has shaped the culture of Octavia’s inhabitants. As of now we are leaning towards using the destruction of the city to show their grief at losing the city, and perhaps their lives, but we have also thought about giving the inhabitants a more fatalistic attitude towards the city’s destruction. Since Calvino writes that the residents are aware of their situation, we thought the inhabitants might not necessarily resist death and destruction but rather embrace it. This is still something we are considering; we want to portray the destruction of the city as something that provokes different responses, just as any disaster would.

We thought a lot about how the user could move using the Google Cardboard. After exploring a few Cardboard titles and realizing that most of their interactions did not rely on movement to create a powerful experience, we are now leaning towards placing a visual cue in front of the user in the direction in which they can walk. When the user’s gaze hovers over this cue (an arrow, for example), it brings the user forward. As a result, we currently see the experience as following one straight path through a street in the city until the user eventually finds their daughter, then leaving the city on a footbridge to conclude. We intend the button on the Cardboard to be used for calling the daughter: the user moves along the prescribed trail, looks around, and presses the button to call for her.
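A rough sketch of the gaze-cue idea, attached to the arrow object itself (the speed value and setup are assumptions):

```csharp
using UnityEngine;

// Hypothetical sketch: while the centre of the camera view (the Cardboard gaze)
// rests on this cue, the player moves forward toward it along the ground plane.
public class GazeMoveCue : MonoBehaviour
{
    public Transform player;       // the Cardboard camera rig
    public float moveSpeed = 1.5f; // walking speed along the street (placeholder)

    void Update()
    {
        // Cast a ray from the centre of the view; requires a Collider on this cue.
        Ray gaze = new Ray(Camera.main.transform.position, Camera.main.transform.forward);
        if (Physics.Raycast(gaze, out RaycastHit hit) && hit.transform == transform)
        {
            Vector3 dir = transform.position - player.position;
            dir.y = 0f; // keep movement on the ground plane
            player.position += dir.normalized * moveSpeed * Time.deltaTime;
        }
    }
}
```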

Update 4/29:

We have started working on designing the scene of the experience. Vinh has taken charge of character animation, Ellen of scripting interactions and movement with the Cardboard, and I of designing the environment and the destruction of the city.

The style we decided to follow was that of a medieval town. We have one big stretch that contains most of the city as well as other floating components that add to the environment of Octavia.

I have worked with a few destruction scripts I found online that allow objects to be shattered into many pieces. I also have a game timer that lets me script when each destruction animation occurs. The difficulty now lies in choosing the exact effects that should play as the city is gradually destroyed. Here are a few screenshots of the scene so far:

Update 5/9:

Over the weekend I worked on adding sounds to the scene. More specifically, I worked on making the sound spatialized: the further the camera is from the game object that is the source of the sound, the quieter the sound is. For now, we have it attached to the daughter, as a prompt for the user to search for the cries that cut through the other environmental sounds (wind and fire). I also added a loud earthquake-like noise that plays as the user crosses the bridge into the mountain, as a prompt to look back and see the destruction of Octavia. I also finished scripting the destruction of the city. To do this, I separated all the contents of the scene into eight game objects. When the player crosses the bridge, this triggers the placement of a Rigidbody on these game objects, which prompts their descent into the abyss.
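A sketch of how such a distance-attenuated source could be configured on the daughter (the distance values are placeholders):

```csharp
using UnityEngine;

// Hypothetical sketch: a fully 3D AudioSource whose volume rolls off with
// distance, so the cries fade in as the user approaches.
[RequireComponent(typeof(AudioSource))]
public class DistantCry : MonoBehaviour
{
    void Start()
    {
        AudioSource source = GetComponent<AudioSource>();
        source.spatialBlend = 1f;                     // fully 3D: volume depends on distance
        source.rolloffMode = AudioRolloffMode.Linear; // fade to silence at maxDistance
        source.minDistance = 2f;                      // full volume inside this radius (placeholder)
        source.maxDistance = 40f;                     // inaudible beyond this (placeholder)
        source.loop = true;
        source.Play();
    }
}
```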

Apart from the falling objects triggered by the game timer, I also made some objects fall when the user comes within a certain distance of them, making the city’s destruction feel more immersive and real.
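A minimal sketch of the bridge-triggered collapse, assuming the player object carries the built-in “Player” tag and the bridge has a trigger collider (the proximity-based falls would follow the same pattern, but with a distance check instead of a trigger):

```csharp
using UnityEngine;

// Hypothetical sketch: when the player enters the trigger volume on the bridge,
// each of the eight city groups gets a Rigidbody and falls into the abyss.
public class CollapseTrigger : MonoBehaviour
{
    public GameObject[] cityGroups; // the eight game objects holding the city contents

    void OnTriggerEnter(Collider other)
    {
        if (!other.CompareTag("Player")) return;

        foreach (GameObject group in cityGroups)
        {
            if (group.GetComponent<Rigidbody>() == null)
                group.AddComponent<Rigidbody>(); // gravity takes over from here
        }
    }
}
```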

We also worked on importing the Google Cardboard reticle functionality into the scene along with the ground that the user is supposed to walk along.

Final Project Development Journal: Wheelchairs?

May 10, 2020

It’s been a while, and there have been numerous updates for this project. We each worked on different things, and there is still a lot to do. Keyin wrote the story that the escape room follows. She also illustrated some of the photos used as clues in the game and created the environment. Ben worked on character movement and the physics in the game.

As of now, this is what our escape room looks like.

Figure 1: Environment

The movement and physics are shown in the links below:
(Movement) https://streamable.com/gs96wb
(Physics) https://streamable.com/1ae7el

One important thing that we wanted to implement was the view from the wheelchair. We are aiming to create something like this:

Figure 2: Wheelchair View

I primarily worked on animating the door and writing the scripts for it. As of now, I have only implemented something simple: if you press the spacebar, the door opens. This script was similar to the one shown in the animation tutorial. I intend to complete the password script soon. However, I did encounter an issue with the door animation: when the door slid open, it would slide into the wall. To resolve this, I removed the entire wall (one wall covered the whole side of the environment) and filled it in with separate planes. I then added another layer on top of where the door slides, to cover it (like a sandwich in which the sliding door slides between the two walls). An animation of the door is shown below:

Figure 3: Door Opening
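A sketch of what such a spacebar-driven door script might look like, assuming an Animator on the door with an “Open” trigger (the trigger name and setup are assumptions):

```csharp
using UnityEngine;

// Hypothetical sketch: pressing the spacebar fires the "Open" trigger on the
// door's Animator, which plays the sliding-door animation.
public class DoorOpener : MonoBehaviour
{
    private Animator animator;

    void Start() { animator = GetComponent<Animator>(); }

    void Update()
    {
        if (Input.GetKeyDown(KeyCode.Space))
            animator.SetTrigger("Open");
    }
}
```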

We have also created an ending scene, and I intend to write a script that connects it to the rest of the experience. The ending scene is shown below:

Figure 4: Ending Scene

April 10, 2020
For our final project, we (Ben, Keyin, and I) initially decided on the escape room/apocalypse prompt. However, Ben later suggested making an escape room based on the experience of someone in a wheelchair. We all took a liking to this idea and started brainstorming potential background stories. Keyin brought up the idea of the user trying to escape from an abandoned hospital or a retirement home. We have yet to fully decide what our story should be about.

We spent the bulk of our time discussing how the wheelchair mechanism would work and what interactions are possible with the Google Cardboard. We decided that when the user is moving, we want them to be able to see part of the body and the hands/arms as they turn the wheels. To move, the user points the Google Cardboard in the direction they want to go and presses the button on the side; the user then moves while an animation of them spinning the wheels plays. In terms of interacting with the world, the two input methods we have are a long click and a short click: we would designate one for movement and the other for interacting with game objects.
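A rough sketch of distinguishing the two clicks by press duration, assuming the Cardboard button registers as a standard screen tap / mouse button (the threshold is a placeholder):

```csharp
using UnityEngine;

// Hypothetical sketch: releases shorter than the threshold count as a "short"
// click (interact); longer presses count as a "long" click (move).
public class ClickClassifier : MonoBehaviour
{
    public float longClickThreshold = 0.5f; // seconds (placeholder)

    private float pressStart;

    void Update()
    {
        if (Input.GetMouseButtonDown(0))
            pressStart = Time.time;

        if (Input.GetMouseButtonUp(0))
        {
            if (Time.time - pressStart >= longClickThreshold)
                Debug.Log("Long click: wheel the chair forward");
            else
                Debug.Log("Short click: interact with the object in view");
        }
    }
}
```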

Figure 1: Brainstorming and Wheelchair
Figure 2: Wheelchair Sample Scene
Figure 3: Abandoned Hospital 

Invisible Cities Response: Maurilia

Maurilia strongly reminds me of many of the European cities and towns I used to visit. In Maurilia, travelers are encouraged to glorify what Maurilia used to be: a quaint rural town with no particular distinctions. This old Maurilia is preserved and portrayed through postcards indicating where things used to be (for example, a hen in place of a bus stop). The contrast between the modernity of current Maurilia and the rural feel of old Maurilia is meant to evoke a strong sense of nostalgia. However, the two versions of Maurilia are arguably too different to be considered the “same” Maurilia; rather, it would be more suitable to consider them two cities that coincidentally share a name.

In terms of a real-life equivalent to Maurilia, the city of Graz, Austria comes to mind. Graz is now the second-largest city in Austria behind Vienna, and it is often characterized as an odd combination of future and past. One notable area is located around Kunsthaus Graz, a strangely shaped art museum that runs on solar power. The museum itself is a stark contrast to the more conventional, traditional buildings around it and serves as a distinct example of the aforementioned “future meets past.” Graz’s tourism markets itself similarly; guides often point out what landmarks “are” as opposed to what they “used to be.” It seems that from the perspective of Graz’s inhabitants, there exists a clear divide between Graz now and Graz before, even though the two intermingle within the same space.

Landscape of Graz
Kunsthaus Graz


Reading Response to Invisible Cities by Italo Calvino

Question: Is there a city that stood out, or that you found especially memorable? Why? Does any city remind you of a city you have lived in or visited, and if so, in what ways?

From there, after six days and seven nights, you arrive at Zobeide, the white city, well exposed to the moon, with streets wound about themselves as in a skein. They tell this tale of its foundation: men of various nations had an identical dream. They saw a woman running at night through an unknown city; she was seen from behind, with long hair, and she was naked. They dreamed of pursuing her. As they twisted and turned, each of them lost her. After the dream, they set out in search of that city; they never found it, but they found one another; they decided to build a city like the one in the dream. In laying out the streets, each followed the course of his pursuit; at the spot where they had lost the fugitive’s trail, they arranged spaces and walls differently from the dream, so she would be unable to escape again.

This was the city of Zobeide, where they settled, waiting for that scene to be repeated one night. None of them, asleep or awake, ever saw the woman again. The city’s streets were streets where they went to work every day, with no link any more to the dreamed chase. Which, for that matter, had long been forgotten.

New men arrived from other lands, having had a dream like theirs, and in the city of Zobeide, they recognized something from the streets of the dream, and they changed the positions of arcades and stairways to resemble more closely the path of the pursued woman and so, at the spot where she had vanished, there would remain no avenue of escape.

The first to arrive could not understand what drew these people to Zobeide, this ugly city, this trap.

—Italo Calvino

Among the various cities that Calvino writes about in Invisible Cities, I find Zobeide the most fascinating and mysterious. Instead of shedding much light on what the city itself looks like, Calvino uses a metaphor to describe how it was established. Zobeide was constructed by men who went there in search of a woman from their dreams, and it contains many dead-end paths intended to cage her, which is what makes it the “ugly city” and the “trap” it is. In my opinion, the woman is a metaphor for unfulfillable desires: by building Zobeide, the men forgot what they were really looking for in life and instead got lost in their own greed.

In the center of Fedora, that gray stone metropolis, stands a metal building with a crystal globe in every room. Looking into each globe, you see a blue city, the model of a different Fedora. These are the forms the city could have taken if, for one reason or another, it had not become what we see today. In every age someone, looking at Fedora as it was, imagined a way of making it an ideal city, but while he constructed his miniature model, Fedora was already no longer the same as before, and what had been until yesterday a possible future became only a toy in a glass globe.

The building with the globes is now Fedora’s museum: every inhabitant visits it, chooses the city that corresponds to his desires, contemplates it, imagining his reflection in the Medusa pond that would have collected the waters of the canal (if it had not been dried up), the view from the high canopied box along the avenue reserved for elephants (now banished from the city), the fun of sliding down the spiral, twisting minaret (which never found a pedestal from which to rise).

On the map of your empire, O Great Khan, there must be room for both the big, stone Fedora and the little Fedoras in glass globes. Not because they are equally real, but because all are only assumptions. The one contains what is accepted as necessary when it is not yet so; the others, what is imagined as possible and, a moment later, is possible no longer.

—Italo Calvino

It is interesting that Calvino chooses to convey a general feel for each city rather than explicitly describing what it looks like. He writes more about the people than the cities themselves, which leaves much room for different understandings of what each city represents; readers can have very different perceptions of the same city. Personally, the city of Fedora reminds me of Shanghai. Fedora has a museum that contains all its people’s fantasies of what the city could look like, yet before any of them could come true, the city became something else, and thus the fantasies remained fantasies forever. To me, this metaphorically points out that no one can ever predict the development of a city, even though a city is built upon nothing but people’s expectations for it. Every city is the result of collective expectation. Shanghai, in this sense, is a city full of traces from different times in history: you can see modern skyscrapers and traditional Chinese buildings in a single glance. It reflects how the city has been shaped by people’s different expectations for the future over time, becoming what it is now.

Invisible Cities – Response

To begin, I want to note that I read the book in English first, and then read the majority (but not all) of it in Italian. I wanted to see how good the translation was, and honestly, the descriptions were exactly the same. I did not find one clear discrepancy that made me think “this was translated totally wrong.”

The city I chose to focus on was Zirma, on page 16. It stood out to me because of its memorable descriptions of the people present: “the blind black man,” “a girl walking with a puma on a leash,” “a fat woman fanning herself,” and “a tattoo artist arranging his needles and inks and pierced patterns on his bench.” Many of the cities are described in environmental detail, but I found the characters described above very memorable and easy to picture. So much so that I was inspired to draw this image based on the city.

“A girl with a puma on a leash.”

For me, the city of Zirma was compelling because I wanted to know the backstories of all of its characters and to learn more about them.

The city really reminded me of when I visited Istanbul, Turkey. I remember seeing so many memorable people and wanting to know how they had ended up in Istanbul, be they tourists or residents. I understand that this is completely subjective, as one can find “interesting”-looking people in any city if one looks hard enough. Still, Istanbul was the first time I saw a mix of hipster fashion next to traditional clothing on every street. I was drawn to many of the people I saw, and wondered where they were from, whether they lived in Turkey, and what had brought them there.