A Form of Representation Suited to VR

A form of representation I feel would be suited to the VR medium is a surgical interface for doctors during an operation.

It creates an interface to the world in which the doctor can comprehensively assess the patient’s physical situation, instead of just seeing it on a flat screen while being influenced by the surrounding environment. It is common nowadays for a doctor’s hands to be performing surgery while his or her head is turned in another direction to watch the live image on a screen. This is an “inhumane” design for doctors because it breaks the coherence between eyes and hands. What’s more, multiple other external conditions might influence a doctor’s judgment – the light in the operating room, the anxious patient, or even the breathing of the surgical assistants. Implementing VR in this specific situation would therefore minimize external influences so the doctor could perform better.

By operating with precise equipment attached to VR, the doctor would be able to focus on minor details in the surgery (e.g., suturing the wound) and reduce failures caused by false judgment.

Project #2 Development Blog

Mar 3

Our group: Vivian, Adham, Cassie, Nico

We started off with some brainstorming for our interactions and actions:

Initial Ideas:

  • Throwing crumpled paper into a basket 
    • Implement points based on how far back you are → makes you move around
    • Obstacles (desk, etc.)
    • Crumpling paper
    • Classroom, library
  • Putting food onto tray- cafeteria
  • Washing face
  • Taking care of plants
    • Zen
    • If you cut the plants they just float around
    • Twisting knob motion to speed up time → plants grow, lighting changes
  • Drawing
  • Slingshot
  • Flipping coin into fountain
    • Something could pop out, you have to catch it

After deciding on the plant idea, which we all enjoyed, we went into more detail:

Taking care of plants:

  • Time
    • Lighting changes
    • Sun/moon
    • Plant growth
  • Environment ideas:
    • Dorm room
    • Windowsill
    • Small cottage
    • Outside garden, fence 
  • Interaction
    • Watering
    • Cutting
    • Picking fruit/flowers
    • Growing bamboo

With a solid idea in mind, we went ahead and designed our storyboard:

–Step 1–

Clump of bamboo in front of you

To your side: tree stump with watering can + cutting tool

Surrounding mountains and other bamboo

You’re inside a circle of rocks

Butterflies are flying around

It’s golden hour

–Step 2–

You have the water picked up

Water is gone from stump

–Step 3–

Bamboo is taller

–Step 4–

Replace water with axe

Now the water is back on the stump and the axe is gone

–Step 5–

Show the particles of the bamboo disappearing

–Step 6–

Now an empty spot of bamboo

Our storyboard:

Mar 10

Started working on the particle system – creating the effect of water coming out of the watering can when the user grabs it and pours it toward the bamboo.

To make the water from the watering can look realistic, I changed the following parameters: start lifetime, start speed, and start size; the gravity modifier to 0.3; and the scaling mode to “Hierarchy”. Under the Emission module, I changed “rate over time” to 200, and for “force over lifetime” I set Y to -3 and applied it in “World” space instead of local. For “rotation by speed”, I changed the angular velocity to 300 – I started with 100, but at that speed the water couldn’t keep up with the player’s movement in the game.
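
These settings were all made in the Inspector, but roughly the same configuration can also be applied from a script. Here is a minimal sketch of that idea – not our actual setup, and the class name and the start lifetime/speed/size values are placeholders:

    using UnityEngine;

    // Hypothetical sketch: applies the water settings described above from code.
    // We set these in the Inspector; the lifetime/speed/size values here are placeholders.
    [RequireComponent(typeof(ParticleSystem))]
    public class WaterSettings : MonoBehaviour
    {
        void Start()
        {
            ParticleSystem ps = GetComponent<ParticleSystem>();

            var main = ps.main;
            main.startLifetime = 1.5f;                             // placeholder
            main.startSpeed = 3f;                                  // placeholder
            main.startSize = 0.05f;                                // placeholder
            main.gravityModifier = 0.3f;
            main.scalingMode = ParticleSystemScalingMode.Hierarchy;

            var emission = ps.emission;
            emission.rateOverTime = 200f;

            var force = ps.forceOverLifetime;
            force.enabled = true;
            force.y = -3f;                                         // pull the water downward
            force.space = ParticleSystemSimulationSpace.World;     // "World" instead of local

            var rotation = ps.rotationBySpeed;
            rotation.enabled = true;
            rotation.z = 300f * Mathf.Deg2Rad;                     // Inspector shows degrees; script values are radians
        }
    }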

Mar 13

Today I worked on making the particle system turn on and off when the object is rotated to a certain angle – when the watering can faces downward the water particle system turns on, and when it’s in its normal position the particle system is off and the water effect isn’t shown.

I reached this goal by using transform.eulerAngles and reading the Z angle of the watering can object. We have a boolean function called “IsPouring”: I grabbed the particle system under it and added code so that if the angle is outside the pouring range the system stops, and otherwise it plays. We call “IsPouring” inside Update() to make sure it runs all the time.
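
Here is a minimal sketch of that logic – the angle range, field names, and class name are assumptions, not our exact script:

    using UnityEngine;

    // Sketch of the pouring check described above; attached to the watering can.
    // The angle range and the names here are assumptions.
    public class WateringCanPour : MonoBehaviour
    {
        public ParticleSystem water;   // the child water particle system

        void Update()
        {
            IsPouring();               // keep checking every frame
        }

        bool IsPouring()
        {
            // eulerAngles.z is in [0, 360); treat "tipped over" as pouring
            float z = transform.eulerAngles.z;
            bool pouring = z > 60f && z < 300f;   // assumed range

            if (pouring && !water.isPlaying)
                water.Play();
            else if (!pouring && water.isPlaying)
                water.Stop();

            return pouring;
        }
    }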

There was a small problem when I tested the code – the particle system was always on while the game was playing. I assumed it had been disconnected from its parent object, so I added a print statement in the IsPouring function to check whether it was connected to the watering can while the code was running. It turned out that nothing was printed to the console log, so I dragged the particle system onto the watering can to make sure it showed up in the component section (even though the particle system was already a child of the watering can), and then it worked.

Mar 15&16

Today I’m working on two things: the interaction code so that the bamboo grows when the particle system is pointed at it (instead of growing when pointed at by the pointer ray), and the floating effect of the rocks (to create a sense of zen in the environment).

  1. The floating rocks effect:

To improve the user experience and create the sense of ZEN, I added the floating effect:

I grabbed a simple float-up-and-down script from the Unity community:

    using UnityEngine;

    public class FloatingRock : MonoBehaviour
    {
        public float amplitude;   // Set in Inspector
        public float speed;       // Set in Inspector
        private float tempVal;
        private Vector3 tempPos;

        void Start()
        {
            tempVal = transform.position.y;
            tempPos = transform.position;   // keep the original x and z; only y oscillates
        }

        void Update()
        {
            tempPos.y = tempVal + amplitude * Mathf.Sin(speed * Time.time);
            transform.position = tempPos;
        }
    }
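
We attach this script to each floating rock and set the amplitude and speed in the Inspector; giving the rocks slightly different speeds (a suggestion, not something the script requires) keeps their bobbing from looking synchronized.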

2. Bamboo grows when pointed at by the particle system:

We finished this part by using OnParticleCollision to detect collisions between the bamboo and the particle system attached to the watering can. In the beginning we decided to add the particle collision detection to the BambooOG script, because the growing function is in the same script and would be easier to call. However, even after being put on a different layer and set to “only collide with bamboo”, the particle system would literally collide with everything. We then tried writing the particle collision detection only in the watering can’s script and calling the bamboo grow function from the other script, to make sure the two parts didn’t interfere with each other. So basically, in the particle system script we say that once it collides with bamboo, it triggers the grow function from the BambooOG script – and that worked. The code we used is sketched below:
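
A minimal reconstruction of the approach (BambooOG is our script, but the “Bamboo” tag and the Grow() method name are assumptions for this sketch):

    using UnityEngine;

    // Attached to the water particle system on the watering can.
    // Requires "Send Collision Messages" to be enabled in the Collision module.
    // The "Bamboo" tag and the Grow() method name are assumptions.
    public class WaterGrowsBamboo : MonoBehaviour
    {
        void OnParticleCollision(GameObject other)
        {
            if (!other.CompareTag("Bamboo"))
                return;

            BambooOG bamboo = other.GetComponent<BambooOG>();
            if (bamboo != null)
                bamboo.Grow();   // the grow function lives in the BambooOG script
        }
    }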

Mar 17

Today I worked on the mist effect, which will only be triggered when the sickle is cutting/colliding with the bamboo.

At first I was thinking about attaching the particle system (the mist) to the bamboo script, so that whenever the bamboo is detected colliding with the sickle (the sickle hitting the bamboo), with the result of destroying a GameObject (a piece of bamboo), the mist particle system would be triggered to play. However, this design has two significant difficulties. One is that OnParticleCollision is really hard to reposition in the “instantiate” call so that the mist effect only shows on the specific piece of bamboo hit by the sickle (since a lot of bamboo grows out of the OG bamboo). The other is that the game object is destroyed at the same moment its child effect is triggered, so the effect never shows at all: the moment it’s triggered, its parent dies, and the mist has nothing to play on.

Taking these conditions into consideration, I created a new mist script just for the sickle, separate from the bamboo function, so we don’t have to reposition the particle system for each specific piece of bamboo. At first I tried to detect the collision of the particle system with OnParticleCollision, but it turned out to be very hard to detect accurately, since there are millions of small particles and they collide with almost everything. I therefore switched to detecting the collision of the sickle itself – once the sickle’s collider hits a game object, the particle system (mist) attached to the sickle is triggered. The code is sketched below:
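
A minimal sketch of that sickle script (the “Bamboo” tag check and moving the mist to the contact point are assumptions; the sickle needs a collider and a Rigidbody for OnCollisionEnter to fire):

    using UnityEngine;

    // Attached to the sickle. The "Bamboo" tag and snapping the mist to the
    // contact point are assumptions for this sketch.
    public class SickleMist : MonoBehaviour
    {
        public ParticleSystem mist;   // mist particle system attached to the sickle

        void OnCollisionEnter(Collision collision)
        {
            if (!collision.gameObject.CompareTag("Bamboo"))
                return;

            mist.transform.position = collision.contacts[0].point;   // show the mist where we hit
            mist.Play();
        }
    }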

Molecular Pathways – A Better Representation

In molecular and cellular biology there are two three-dimensional things that are often portrayed in two dimensions: molecules and their interactions within pathways. Molecules are usually drawn as simple blobs or shapes in order to better visualize the different domains that have specific functions (proteins have very complex shapes). With these simple shapes, their interactions are usually connected by a vast map of arrows and inhibitions, which forms the pathways that biologists study and use to develop function-specific drugs or site-specific research. This decision to go simple and 2D has made studying and using biology a lot simpler, but it has removed a very important factor: molecules and pathways have movement. This movement, which is important in the interaction of proteins, depends greatly on size and surroundings, which are all three dimensional. Naturally, modeling this mathematically is quite complex and requires a lot of prior knowledge and computational power, but once achieved it has great benefits. If these pathways could be brought into a dynamically moving, interactable three-dimensional world, there would be the possibility of better research and understanding in medicine. By better understanding what happens when certain interactions are “physically” and “visually” removed, there would be less wasted effort in pathway structure and experimental design.

Check out this cancer pathway map: https://www.qiagen.com/dk/shop/genes-and-pathways/pathway-details/?pwid=301

Interaction

My favorite interaction in life is board games. The way they are made is so simple, but there are rules that you have to follow. An example would be a game called “Munchkin”. The game is a tiny version of “Dungeons and Dragons”. The goal of the game is to reach level 10 (or level 20 in the expansions).

The game is simple and complicated at the same time. You interact with it using dice, some coins, and of course the cards.

Representation in VR. Max.

Greetings!

My perfect representation suited to the VR medium is a social media network. I know that sounds a bit strange and introverted, but that is how I see VR. We use social media every day: Facebook, Instagram, Snapchat, etc. Why wouldn’t we use them in VR? Imagine walking through a hall of your friends, viewing their pictures, hitting the like button – but actually doing it in a VR world. I think that would be an amazing representation of VR.

Representation in VR

My friend and I were talking about the fact that despite the radical differences between different groups of languages, the structure of word groups is usually the same. Thus, if we map all the words in English, for example, into a virtual space, we can expect to see that all the English words related to family cluster together. Meanwhile, when we map all the Chinese words into the same space, all the Chinese words related to family will cluster in the same region. Such a correspondence between English words and Chinese words makes translation and language learning much easier, considering how different the two languages are in terms of grammar, characters, etc.

The reason I think words should be mapped in a 3D space rather than a 2D one is that the connections between words and word groups are too complex to be represented in 2D. In a 3D world, the “distance” between different words will be less skewed: people pick one word or word group, look around, and then see all the connected words and word groups around them. This becomes an interface because people can be immersed in the world of words, and seeing the connections between words and word groups shapes their understanding of language.

Form of Representation Suited for VR: Coding World

I’ve always been attracted to VR simulations that go beyond entertainment. I firmly believe that the possibilities for VR are limitless and that it can be used to solve problems that hinder humans all over the world. Therefore, I believe that the best VR simulation that can be implemented is an educational one. Ever since I was a kid, I have considered myself a visual learner. Despite the fact that I can understand things on a superficial level just by listening, all information that is cemented in my brain has some form of visual representation attached to it. The same applies nowadays as I am studying Data Structures and Algorithms. I always need to spend time making drawings and concept maps in order to fully understand the concepts in class. Some of the concepts are really difficult to follow just by listening to a lecture, and a virtual 3D animation of concepts like recursion, binary trees, and sorting algorithms would be best understood by learners all over the world who struggle with them. This world would resemble platforms like Scratch that teach programming to kids, but would match the complexity of higher-level programming concepts and algorithms with a medium like virtual reality, which represents the concepts much better.

Teaching Coding to kids

Form of Representation Suited to VR: Immersive Architectural Model Building

When designing a building, often no one knows what the building will really feel like to be inside until it is built. Now, with VR, we can explore the inside of a building that hasn’t been built yet, or even a building that is impossible to build.

The form of representation I would like to write about in this post is based on my own experience in creating three-dimensional models. I went to architecture school for a year. We were encouraged to start thinking about our projects by making sketches. Though I would start out this way, scribbling in a sketchbook, I would quickly become frustrated: though my sketches managed to capture the feel and aesthetic of what I wanted, they never managed to convey a sense of the space. I quickly moved on to making rough three-dimensional sketches or sculptures with bits of paper, pizza boxes, and a box cutter. This really helped me think spatially; what it would take a plan, several sections, and an isometric view to convey was immediately visible in the three-dimensional sketches.

A few years later, I was teaching myself Maya. The modeling capability of the software was more powerful than my limited real-life model building ability. Yet the interface of the 2D surface of the screen was a barrier to really being able to see what I was doing as I built. I would move a vertex a certain amount, and once I rotated the camera, I would realize that I had moved it too much in the x, y, or z direction without even noticing.

VR would be useful for giving a client a tour through the model of a building. But it could also be extremely valuable at the sketching, conceptualizing, and designing stages. The software would consist of ‘dynamic material’ that the user can manipulate by holding onto, dragging, and scaling surfaces, vertices, edges, and volumes, as in Maya.

The software would also have two modes: a miniature mode, where the architect can tinker with the model and change things, and an immersive mode. Thinking about the scale of the body is also extremely important in designing architecture. The architect would be able to move from working at miniature scale to placing themselves inside the model as they work on it.

There are some elements of a building that VR cannot yet capture, like heat and air currents, but VR would be excellent for representing and creating a sense of space.

Visualizing Data in VR

During Bret Victor’s talk, I loved learning about William Playfair and how he invented the bar chart and other graphical methods to represent data. Related to these methods are “explorable explanations”: abstract representations that show how a system works, or a way for authors to see what they are authoring without the black box of code.

Data visualizations are a powerful representation suited for VR. Though some visualizations have been developed in VR, they usually rely on the game engine to navigate between charts, or they have some irrelevant motion, like the bars rising in a bar graph when it first loads. I think with VR we can do more to incorporate the different modes of understanding that Bret Victor mentioned. For instance, we can build upon our spatial understanding to understand quantities, time, associations between nodes of information, or even how the charts are organized (like a library of books). We can build upon our aural understanding by having audio that explains the data and walks the user through it at a level specific to the user’s experience.

VR can make a data visualization an interface to information that makes the data accessible and easy to understand through abstraction. However, there is also potential for it to unpack the layers of abstraction and show how the data visualization has been made, or even give the context behind the data. For instance, if there is a chart showing the amount of snowfall, could the user be immersed in the environment showing the snowfall alongside the data visualization of its levels? Data visualizations are a person’s stories about the data; they are already created with a specific objective for their audience in mind. The trouble with these visualizations is that they tend to dehumanize the context behind the data, so VR really has the ability to use its potential for immersion to help the audience better understand the story. However, it is important that VR not abuse this potential by falsifying the data, creating a specific immersive experience that causes a different perception of the data. I also think using VR to visualize data relates to the dynamic models that Bret Victor discusses at the end of his talk. Imagine data being updated in real time and seeing how the representation changes: the bar increasing, a point being added to a line graph, etc.

Project 2 Development Blog: Zenboo

Group members: Adham, Cassie, Nico, Vivian

March 3

planning/storyboarding – see Nico’s post

March 10

This past weekend I started to build up the environment. I started out by messing around with the terrain builder, as part of our plan included having the bamboo garden surrounded by mountains. This was relatively straightforward and fun to play with, though there were a few differences when I went from using my laptop (I have 2017.3) to the PC in the classroom.

When I made a perimeter of mountains, however, placing the camera in the center made the mountains seem a little overbearing. To compensate, I placed a platform mountain of sorts in the center, with the camera on top of it. This way it feels like the user is up in the mountains, which gives a much more peaceful effect:

I chose to create somewhat jagged-y mountains because they reminded me of some of the mountains in China – rough yet mystical.

I also played around with the painting feature, and painted flower details onto the platform the user stands on. I got the texture from Grass Flowers Pack Free, and didn’t realize until I placed it in the environment that the flowers actually moved around, as if they were swaying in the wind. I’m not sure if we will keep this effect or not, but for now I think it adds a nice peaceful touch, and could possibly be accompanied by calming wind sound effects.

To build up the rest of the environment I relied on other prefabs. To create the circle of rocks surrounding the user, I used rocks from LowPoly Rocks. I got the bamboo from Shuriken Set (which Nico found), the watering can and sickle from Garden Realistic Tools, the tree stump from LowPoly Trees and Rocks, and the skybox from Skybox Series Free. At the moment, this is what one view of the environment looks like:

We’ll have to talk more about how we want the bamboo to be represented and how close it should be to the user. Since there is a circle of rocks, it might make sense for there to also be a circle of bamboo rather than just a section of bamboo in front of the user.

March 11

During class we touched base on more stuff to do for the environment:

  • Waterfall – found something in the assets store for this, Water Fx Particles
  • Make log wider – this way it works with the gravity of the objects on top of it (before the watering can and sickle were falling off of the log for some reason…this was why)
  • Make the bamboo closer to the user – less walking, limited space
  • Prettier bamboo material
  • Not have grass/flowers too high or else when objects fall you can’t see them
  • Have a different color flower – right now it looks like hay

I also started looking around for some music and sound effects we could use. I found some nice sounds of birds chirping and leaves rustling in the wind, as well as a sound that could be good for when the bamboo is being cut.

Later in the evening Nico and I also worked on putting together the scripts he had and the environment. This was really nice because it gave me a better idea of how the space looks, and how it can be better designed, when wearing the actual headset. We ended up scaling everything in the environment down so it would be easier to work with in the scripts, and so that the user didn’t have to walk as much. This actually ended up having a nice visual effect as well, since it somehow felt more like a canyon. We also talked about more things to work on with the environment:

  • With the rock circle – make the circle smaller, with less but bigger rocks, and experiment with them floating
  • Take flowers off of the mountains (I had accidentally painted these on, which wasn’t apparent until we scaled everything down)
  • Add material variation in the mountains
  • Add more bumps in the platform mountain the user stands on for terrain variation

March 12

Today I worked on making some of the improvements to the environment:

  • Removed the flowers that were on the background mountains
  • Made the rock circle into larger floating rocks, tilting at different angles and floating at different heights. I actually really like this effect; I think it gives an odd sense of power yet is still zen.
  • Added in more bumps/raised terrain sections around the platform mountain that the user stands on
  • Started experimenting more with the ground…put pinker flowers in the back and short green grass in the circle where the user stands. However, there is a kind of color warp that happens when the user moves their head around, which doesn’t necessarily cause any issues but might look slightly out of place. I’ll have to see what the others think.

Here’s what the environment looks like at this point (sans bamboo – will add this in as a group tomorrow):

March 17

These past couple of days were spent putting finishing touches on the environment. The general consensus on the grass and flowers was that the color change was due to rendering, and was too distracting for the user. I found a blog post online about how to make grass using the tree building tool, but I was having a little trouble getting it to work. I also tried to mess around with the terrain settings, yet this was in vain as well. Eventually, the issue was resolved by painting the grass as trees rather than as a detail – I ended up finding a grass model in a package from the asset store and using that. I also added bushes in the mountains to look kind of like trees, to get some variation in the color of the mountains and to look more like those mystical yet peaceful Chinese mountains I was inspired by.

The last thing I worked on was the placement of the bamboo. The space the user is in is a bit limited, so the bamboo had to be placed in a way that didn’t require the user to walk much. I ended up placing the bamboo in a semicircle around the user, so the user can simply turn around to view the other bamboo stalks that are available to interact with. I think this placement also gets the user to turn around and take in the 360 view around them, whereas a clump of bamboo in front of them would simply station their viewpoint in one spot.