Virtual View: building the installation

During the discussion with the hospitals it became clear that I couldn’t just put my stuff in a room and leave it there, especially as the space was open to the public all day. So I had the idea of building a piece of furniture that would act both as a chair and a chest for the hardware. As it seemed rather complex to integrate everything in a foolproof manner, I contacted DIY wizard Aloys.

We discussed the basic requirements and decided on building a first sketch version that we could improve on later. Essential was the integration of the PC, sound system, beamer and heart-rate sensor. It had to be stable and elegant at the same time. The chair should also act as an on-off switch by detecting user presence. Of course time and budget were limited. So Aloys first made a CAD drawing. He also made a cardboard sketch.
CAD drawing of the installation
I wanted the operation of the installation to be simple: a one-switch interface to turn the complete installation on and off. We managed that with a rod that prods the power switch of the PC, which acted as a primitive key for the staff. Aloys also provided a lock so I could open the chair and get to the hardware and power supply when needed. The mouse and keyboard were also locked inside the chair, making it impossible to stop the program without the rod key.
full view of installation
When no one is using it, the installation shows a static image of the animation to attract attention, and soft wind sounds are played. The software saves a still every minute, so a different image appears after each use. Once a user sits down (detected by a hardware switch connected to an Arduino) she is prompted to attach the clip of the sensor to her earlobe. When the sensor is detected, the animation and soundscape start. The speakers are integrated in the chair and create a very spacious and lifelike sound, which gives a strong sense of presence. Users can stay and enjoy the installation for as long as they like.
When they get up the animation freezes and the sounds mute, except for the soft wind.
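For anyone curious how the pieces hang together: below is a minimal Processing sketch of the seat-and-sensor logic. It is an illustration only, not the installation code; the serial flags ('S'/'s' for seated or empty, 'P'/'p' for pulse found or lost), the port choice and the helper functions are assumptions made up for this example.

import processing.serial.*;

Serial arduino;              // serial link to the Arduino in the chair
boolean seated = false;      // state of the chair switch
boolean sensorOn = false;    // ear-clip pulse sensor detected

void setup() {
  size(1280, 720);
  // assumed: the Arduino is on the first serial port and sends single
  // characters for the two switches
  arduino = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  readArduino();
  if (seated && sensorOn) {
    drawAnimation();          // full animation and soundscape (not shown here)
  } else {
    drawFrozenFrame();        // static still, soft wind sounds only
  }
  if (frameCount % (60 * 60) == 0) {
    saveFrame("still-####.png");  // keep a still every minute (at 60 fps)
  }
}

void readArduino() {
  while (arduino.available() > 0) {
    char c = (char) arduino.read();
    if (c == 'S') seated = true;
    if (c == 's') seated = false;
    if (c == 'P') sensorOn = true;
    if (c == 'p') sensorOn = false;
  }
}

void drawAnimation()   { /* landscape animation driven by the heart rate */ }
void drawFrozenFrame() { /* last saved still */ }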

animation from user perspective

Most people found the experience relaxing and enjoyable. Some software issues emerged that I’m solving now. The chair was not very comfortable, so that is something we will work on in the next version. It also wasn’t very clear to users how the heart rate was visualised. I’m improving that by creating more links between the audiovisuals and the physiological data without distorting the landscape feeling.
I also want the next version to be more mobile. That way I can easily take it for a demonstration.

Virtual View: programming animation

I’m still working hard on my animation. It’s going a bit slower than anticipated (what else is new) but I’m confident that I’ll have a nice, representative animation finished for the experiment. As an inventory, these are the elements that I want in the test (and probably final) landscape: a horizon with hills, sky, a body of water, a shoreline and trees on the hills. And the animation elements: clouds, individual birds and flocks of birds, a butterfly, a bee, blowing leaves and ripples on the water. The forces I’m working with now are wind and gravity, but I might include more, for example to make the water ripples move naturally.
So far I’ve built the look and feel of the landscape, tweaking it a little here and there as I go along. I’m very happy with the clouds. They consist of many circles positioned using the Perlin noise algorithm: big ones at the top and smaller ones a bit lower.
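For the curious, this is roughly the idea (a quick sketch, not my actual cloud code): a cluster of translucent circles whose positions drift through Perlin noise, with the size mapped to height so the circles near the top are bigger.

float t = 0;  // time offset used to drift the clouds

void setup() {
  size(800, 400);
  noStroke();
}

void draw() {
  background(215, 230, 240);           // pale sky
  fill(255, 255, 255, 40);             // soft, layered white
  for (int i = 0; i < 300; i++) {
    // noise() gives each circle a smoothly varying position
    float x = noise(i * 0.13, t) * width;
    float y = noise(i * 0.31, t + 100) * height * 0.5;
    // bigger circles at the top, smaller ones a bit lower
    float d = map(y, 0, height * 0.5, 60, 15);
    ellipse(x, y, d, d);
  }
  t += 0.002;                          // slow drift
}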

Some frames of clouds moving

I’ve reduced the number of hills visible, as I think too many lines make for a chaotic landscape and a restless feeling. The gradients for the sky and the water surface are now the same; that just seems more logical.
I’ve also included a shoreline to account for the appearance of the blossom leaves and butterflies.
I finally managed to give the blowing pink blossom leaves a natural look. It was quite a challenge to make them rotate and move in the joyful and fascinating way leaves do.
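The approach, in a stripped-down sketch: each petal is a little body that accumulates a gravity force and a noise-based wind force and tumbles with the gusts. My real petals use more elaborate flutter, so treat this only as an illustration of the force idea from The Nature of Code.

// A single blossom petal pushed around by gravity and a noise-based wind.
PVector pos, vel, acc;
float angle, spin;

void setup() {
  size(800, 400);
  pos = new PVector(100, 50);
  vel = new PVector(0, 0);
  acc = new PVector(0, 0);
  spin = random(-0.05, 0.05);
}

void applyForce(PVector f) {
  acc.add(f);
}

void draw() {
  background(230, 240, 245);

  applyForce(new PVector(0, 0.02));                 // weak gravity
  float gust = noise(frameCount * 0.01) - 0.3;      // wandering wind
  applyForce(new PVector(gust * 0.05, 0));

  vel.add(acc);
  vel.limit(2);
  pos.add(vel);
  acc.mult(0);
  angle += spin + gust * 0.1;                       // tumble with the wind

  pushMatrix();
  translate(pos.x, pos.y);
  rotate(angle);
  noStroke();
  fill(250, 190, 200);
  ellipse(0, 0, 14, 7);                             // petal as a flat ellipse
  popMatrix();

  if (pos.x > width || pos.y > height) {            // recycle the petal
    pos.set(random(width * 0.2), random(-50, 0));
    vel.set(0, 0);
  }
}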

Some frames of blossom animation

The next step will be to continue with the water ripple animation and the birds. Finally I will be working on the trees on the hills. All elements will be kept as simple as possible: the movement tells most of the story, not the resemblance.

At the moment I can start animation elements at will, which is nice for constructing a story. I can also use this in the experiments with the prototype to test the effect of certain animated elements. But eventually the animations should start depending on heart-rate variables; that’s what I’ll have to work out when experimenting with the prototype.
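As a sketch of that trigger idea (the function names, the threshold and currentHRV() are placeholders, not working code for the sensor pipeline): manual key presses now, a heart-rate variable later.

boolean useHeartRate = false;   // flip to true once the sensor drives things
float calmThreshold = 60;       // assumed HRV value, purely illustrative

void draw() {
  // ... draw the landscape ...
  if (useHeartRate && currentHRV() > calmThreshold) {
    startBirds();   // reward a calmer state with a flock
                    // (in practice this needs a cool-down so it doesn't fire every frame)
  }
}

void keyPressed() {
  if (key == 'b') startBirds();     // manual triggers for storytelling
  if (key == 'l') startBlossom();   // and for the prototype experiments
}

void startBirds()    { /* release a flock */ }
void startBlossom()  { /* start the blossom leaves */ }
float currentHRV()   { return 0; }  // stub until the sensor pipeline exists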

Virtual View: developing animation

The past month I’ve been working on my landscape animation. By chance I discovered a great book by Daniel Shiffman called The Nature of Code. The book explains how to convert natural forces into code. I’m working through it, picking the forces and algorithms that suit my needs. So far the noise() function in Processing has proven very useful. It allows for creating more natural variation (as opposed to the random() function). I use it for creating the landscape horizons and for some forms of animation.

Test for creating hills with Perlin noise
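The test boils down to something like this: each ridge is a row of vertices whose heights come from noise(), so neighbouring points stay related and the line reads as a hill instead of jittery static (which random() would give). A simplified sketch, not the exact test code.

void setup() {
  size(800, 400);
  noLoop();
}

void draw() {
  background(245, 240, 230);
  noFill();
  for (int h = 0; h < 3; h++) {               // three overlapping hill lines
    stroke(60, 60, 60, 180);
    beginShape();
    for (int x = 0; x <= width; x += 5) {
      float y = height * 0.4 + h * 50
              + noise(x * 0.004, h * 10) * 120;  // smooth Perlin heights
      vertex(x, y);
    }
    endShape();
  }
}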

In a previous post I described how I calculated the colours used in a woodblock print by Hokusai. Since then I have discovered the colorlib library, a super-fast library for creating palettes and gradients from existing pictures. You can sort the colours and manipulate the palette using various methods. This means I can change my colours dynamically depending on user input.
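I won’t reproduce the colorlib calls here (see the library’s own documentation for those); the plain-Processing sketch below only approximates what I use it for: sample a handful of colours from a picture and sort them on the green channel. The filename is a stand-in.

void setup() {
  size(600, 100);
  PImage img = loadImage("hokusai.jpg");   // stand-in filename
  img.loadPixels();

  // sample a small set of colours from the image
  int n = 12;
  int[] palette = new int[n];
  for (int i = 0; i < n; i++) {
    palette[i] = img.pixels[int(random(img.pixels.length))];
  }

  // sort the swatches on their green component (simple insertion sort)
  for (int i = 1; i < n; i++) {
    int c = palette[i];
    int j = i - 1;
    while (j >= 0 && green(palette[j]) > green(c)) {
      palette[j + 1] = palette[j];
      j--;
    }
    palette[j + 1] = c;
  }

  // draw the sorted palette as swatches
  noStroke();
  for (int i = 0; i < n; i++) {
    fill(palette[i]);
    rect(i * width / n, 0, width / n, height);
  }
}

Gradients between neighbouring swatches can then be made with lerpColor(), which is how I think of the sky and water bands in the screenshots below.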

Colorlib palette from Hokusai picture. Sorted on the colour green.

Apart from working through the book and creating basic animations I’m working on the look and feel of the landscape.

As I explained earlier this is based on the work of Hokusai. To my delight I discovered that a colleague is one of the few Dutch experts on Japanese woodblock printing, having received training in Japan. On top of that Jacomijn den Engelsen is also an artist whom I’ve admired for years. I met with her yesterday in her studio to learn more about this fascinating technique.

Jacomijn demonstrating the Japanese woodblock printing technique.

The characteristic look of the pieces comes from the use of water-based paint on wet rice paper. For every colour a separate woodblock is used. The typical black outlines are also printed from a separate block.

Screen print from animation. Colorlib gradient used for sky and water.

The prints have a very flat, 2D feel. That is what I like: it is a kind of primitive picture of a landscape. The view people will be seeing won’t be a 3D simulation of nature but an artistic representation, a work of art with healing properties.

I’m not a painter or draughtsman so I was very happy with the tips Jacomijn gave me on how to make the landscape more convincing while still keeping the ‘Japanese flatness’.

Virtual View: animation theory

The last few weeks I’ve been working on designing and researching my third experiment. The next step will be to introduce animation and to study its effects. I was curious to see if there had already been research into the effect of different types of animation on stress reduction. Rather to my surprise I couldn’t find anything. It was hard to find any articles on animation whatsoever…

My starting point was neurocinematics and psychocinematics, new fields of research on cognitive function during movie viewing. Attention is an important subject here. Then I found a journal dedicated to animation, with some very interesting information on the nature of animation and on the links between Eastern philosophy and religion and Japanese anime. This way I can combine the visuals of Virtual View with Zen meditation and Buddhism, which I have been practising for almost 20 years. I realize now that the forest experiences on which Virtual View is based are rooted in my meditation practice. This is also what I want to convey with this installation.

In the next part I’ll summarise my findings and explain how I will test them in the animations I will make for my next experiment.

Even though the book chapter by Carroll and Seeley (draft version) is about Hollywood cinema, it sheds light on some aspects of my research. Because I want the users of Virtual View to have a relaxing and restorative experience, attention is very important. How do I keep my users softly fascinated? Hollywood films capture attention by giving viewers only just enough information, using stylistic conventions. They also use variable framing: different techniques like camera movements and zooms that direct our attention. The theatre design, with its big screen and darkened surroundings, helps to minimize cognitive load. My interpretation of these ideas is that a certain amount of abstraction can heighten attention, as can “camera” movement. These are things to play with. The actual installation should be set up to avoid distractions.

What interested me in the article by Torre is that animation can be expressive in itself. Motion can be transferred from one object to another to create surprising results and, again, capture attention. Because animation can be layered, movement and transformation have a cumulative effect and make anything possible, in the way the impossible is possible in dreams. It will be nice to experiment with non-realistic events in the Virtual View animation.

The articles by Chow were a real revelation to me. His ideas about types of liveliness and holistic animacy fit perfectly with what I had in mind for Virtual View and what should be happening on the screen. For him, primary liveliness is goal oriented and can be seen in, for example, Disney animations where a character causes all kinds of events. Secondary liveliness is unintentional and emergent: the sort of movement that can be seen in nature, such as the swarming of birds and the waving of trees, but also growth and shape changing. Where primary liveliness focuses our attention, secondary liveliness dilutes it, capturing our attention in a soft way. For Chow this wonder is linked to the ideas of Daoism and the concept of kami in Shintoism, which both promote respect for and connection with nature. He also explains liveliness in computer graphics: techniques like morphing, looping and Boids are good representatives of secondary liveliness. The eastern animation style called anime also has good examples of this kind of liveliness. He shows some very nice examples of computer animation:


The Nintendo DS game Electroplankton
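To get a feel for this emergent, goal-less kind of movement, here is a bare-bones flocking sketch in Processing, a toy version of Reynolds’ Boids rules (separation, alignment, cohesion) and not code from any of the works Chow discusses. No single bird has intentions, yet the group feels alive.

int N = 60;
PVector[] pos = new PVector[N];
PVector[] vel = new PVector[N];

void setup() {
  size(800, 400);
  for (int i = 0; i < N; i++) {
    pos[i] = new PVector(random(width), random(height));
    vel[i] = PVector.random2D();
  }
}

void draw() {
  background(235, 242, 247);
  fill(40);
  noStroke();
  for (int i = 0; i < N; i++) {
    PVector sep = new PVector();
    PVector ali = new PVector();
    PVector coh = new PVector();
    int neighbours = 0;
    for (int j = 0; j < N; j++) {
      if (i == j) continue;
      float d = PVector.dist(pos[i], pos[j]);
      if (d < 50) {
        ali.add(vel[j]);                       // sum neighbours' headings
        coh.add(pos[j]);                       // sum neighbours' positions
        neighbours++;
        if (d < 20) {                          // too close: steer away
          PVector away = PVector.sub(pos[i], pos[j]);
          away.div(d * d + 0.01);
          sep.add(away);
        }
      }
    }
    if (neighbours > 0) {
      ali.div(neighbours).setMag(0.03);                  // align with the group
      coh.div(neighbours).sub(pos[i]).mult(0.0015);      // drift towards its centre
      vel[i].add(ali).add(coh);
    }
    vel[i].add(PVector.mult(sep, 2));
    vel[i].limit(2.5);
    pos[i].add(vel[i]);
    pos[i].x = (pos[i].x + width) % width;     // wrap around the edges
    pos[i].y = (pos[i].y + height) % height;
    ellipse(pos[i].x, pos[i].y, 5, 5);
  }
}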

Chow also refers to old Chinese maps in which the design of the landscape elements was tightly regulated by rules. Rules are there to be bent, so artists turned the maps into multi-perspective narratives. The maps are just beautiful, clear and mysterious at the same time.

Needham, J. (1962[1959]) from Science and Civilisation in China

Chow goes on to link animation with interactivity by looking at some pre-cinematic technologies. One of them is the handscroll, an ancient form of Chinese painting of which Along the River During the Qingming Festival is an example. When someone looks at a handscroll painting they interact with the picture, emulating what we now call a camera pan. This way they include both time and space in the painting. I’m now considering capturing head movements as added interactivity for Virtual View: the animation will pan in the direction the head moves. This way people can expand their view and have a richer, more varied experience.


The installation Along the River During the Qingming Festival
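A rough sketch of how such a pan could work, with mouseX standing in for the tracked head position (I haven’t settled on a tracking method yet): the landscape is simply drawn wider than the screen and translated by an eased offset.

float panX = 0;

void setup() {
  size(800, 400);
}

void draw() {
  background(240);
  // map the head (mouse) position to a pan offset and ease towards it
  float target = map(mouseX, 0, width, 0, -width);  // landscape is twice as wide
  panX = lerp(panX, target, 0.05);

  pushMatrix();
  translate(panX, 0);
  drawLandscape();          // placeholder for the full landscape drawing
  popMatrix();
}

void drawLandscape() {
  // stand-in content so the pan is visible: a row of "hills"
  noStroke();
  fill(120, 150, 120);
  for (int x = 0; x < width * 2; x += 120) {
    ellipse(x, height * 0.8, 200, 160);
  }
}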

The last inspiration comes from an in-depth article by Bigelow on the Japanese animation artist Miyazaki. She states that Miyazaki creates an aesthetic experience “… that invokes a Zen-Shinto pre-reflective consciousness of the interrelation of the human with the tool and nature. It is a way of perceiving change in stillness…”. Bigelow sees a parallel between the state of mind of the artist and the state of selfless emptiness as it is described in Zen Buddhism and Shinto. It can create a state of wonder because this empty mind precedes concepts and naming. In his films Miyazaki also expresses the Shinto notion of kami, in which all things have a life spirit. This way of looking at reality makes way for a dimension of mystery and wonder to be discovered in nature.

Japanese anime is rooted in the art of woodblock printing of which I am a great fan. Miyazaki’s work is not photo-realistic but tries to capture the essence of reality that expresses interconnectedness. These things can, in his view, be lost in virtual reality, as it is often very technical and an industrialised method.

In my Virtual View animation I would like to evoke a sense of wonder by offering a non-photorealistic view that is lively in a way that is reminiscent of real nature. I don’t want to replicate nature the way it is done in 3D virtual reality; that always seems dead to me, and after reading these articles I understand why. The aesthetics will come from eastern art, which I love. The view will be a lively tableau with different kinds of computed animations which have their origin in natural phenomena. I will introduce panning to add extra space and “time” to the animation and to be able to add more conscious interactivity in the prototyping stage. I will use these starting points to create a video of the animation which I will test in my next experiment. A description of that will appear soon on this blog.

Sources

Carroll, N. & Seeley, W. P. (2013). Cognitivism, Psychology, and Neuroscience: Movies as Attentional Engines. In Psychocinematics: The Aesthetic Science of Movies (draft copy).
Torre, D. Cognitive Animation Theory: A Process-Based Reading of Animation and Human Cognition. Animation: An Interdisciplinary Journal, 9(1), 47-64.
Chow, K. K. N. (2009). The Spiritual—Functional Loop: Animation Redefined in the Digital Age. Animation, 4(1), 77-89.
Bigelow, S. J. (2009). Technologies of Perception: Miyazaki in Theory and Practice. Animation, 4(1), 55-75.
Chow, K. K. N. (2012). Toward Holistic Animacy: Digital Animated Phenomena Echoing East Asian Thoughts. Animation, 7(2), 175-187.
Shimamura, A. P. (2013). Presenting and analyzing movie stimuli for psychocinematic research. Tutorials in Quantitative Methods in Psychology, 9, 1-5.

performance 11-5

Last Friday was the première of the breathing_time performance. I was very, very nervous. So nervous that I forgot to start the sound software… But the animation was beautiful and the data came through very well. One participant wasn’t present; I don’t know what went wrong. He only came online at 19:15, as you can see from the logs below.

The logs show very nicely how the breathing patterns of all participants differ:

These visualisations are from the sense-os website, the timeline view. Here are some of the drawingBreath visuals:

As an encore I did a little session with sound by myself.

It’s alive!

I’ve almost got the whole application working. Yesterday I managed to animate the map and the heart icon. Because I keep moving around (and the GPS logger isn’t always exact) the map is continually shifting a little. Together with the moving heart-beat ‘snake’ you really get the feeling this work is alive. And of course the work is about life and the way it’s changing constantly, so I’m rather pleased with the way it’s turning out. Watch a little video about the living map (a Windows Media file).

Animated

Yesterday I continued working on the heart-beat graph. I’ve managed to animate it. The speed of the animation varies with your cursor position and ranges from 1000 milliseconds to 1 millisecond. When the animation goes faster, the string of dots seems to be alive, like some snake-like creature. Fascinating to watch.

The dots are 15 pixels apart. For my test file with 20 hours of data the heart-beat graph has a width of 107374 pixels! Flash is pretty powerful: it calculates this image in less than a second, amazing.
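The graph itself is built in Flash, but the speed mapping translates directly to Processing. Below is a rough stand-in version with fake data, just to show the idea of mapping the cursor position to the delay between animation steps.

int nDots = 2000;
float[] bpm = new float[nDots];
int head = 0;              // index of the most recently shown dot
int lastStep = 0;          // time of the previous step, in ms

void setup() {
  size(800, 300);
  for (int i = 0; i < nDots; i++) {
    bpm[i] = 60 + 25 * sin(i * 0.05);   // stand-in heart-rate data
  }
}

void draw() {
  background(255);
  int interval = int(map(mouseX, 0, width, 1000, 1));  // 1000 ms .. 1 ms
  if (millis() - lastStep > interval) {
    head = (head + 1) % nDots;
    lastStep = millis();
  }
  // draw the last stretch of dots, 15 px apart, newest at the right edge
  fill(30);
  noStroke();
  for (int i = 0; i < width / 15; i++) {
    int idx = (head - i + nDots) % nDots;
    float y = map(bpm[idx], 30, 100, height - 20, 20);
    ellipse(width - 20 - i * 15, y, 6, 6);
  }
}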

As for the data collection: I’ve slept a night with the belt fastened by broad strips of medical tape. It worked till 7 am, which is an improvement but still not perfect. I’ve posted the question in a user group and hope to get some tips for improving this.

PS: My lowest heart rate was 34 bpm that night! The lowest I’ve ever measured.