Soft Meditation, first prototype

For the past couple of weeks I have been working on the first version of my Soft Meditation piece. This is a performance in which I meditate while live sensor data from my body is transformed into an animated, artistic visualisation.

Soft Meditation Performance photograph by Kristin Neidlinger

Background

For the past year I have been developing, together with a team, the Meditation Lab Experimenter Kit: a tool-kit consisting of a sensor suit and software that allows you to monitor and optimise your meditation practice through self-experimentation and interaction with the environment.
Soft Meditation is the first application made with this tool-kit. It uses the API to create generic imagery from live sensor data collected with the suit. My aim is to explore whether donating personal data can create a positive, meditative effect in others even though they aren’t meditating themselves.

Why soft?

The title of the performance refers to the environmental psychology term soft fascination, coined by Kaplan and Kaplan as part of their attention restoration theory. In my own words: the theory describes how looking at natural phenomena, like waves on the water, captures your attention without causing any cognitive strain. That way the mind can restore and refresh itself. Meditation is all about attention, and I am looking for an easy way to capture the visitors' attention and take them to a place of calm.
Trying to do this with meditation is, contrary to popular belief, quite hard work. So soft also refers to the gentle and playful way in which, I hope, a meditative state of mind is achieved.

Inspiration For Soft Meditation

How soft?

But how do I capture attention in a way that is calming and uplifting? I've read some articles (see the references below) about the affective properties of motion graphics and compiled an inventory of effects. For my goal it would be best to use slow, linear motion from left to right. I could then play with speed and waviness to create more intensity and interest, driven directly by the sensor data.
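To give an idea of the kind of motion I am after, the sketch below draws a single wave drifting slowly from left to right; the speed and waviness variables are the parameters I intend to drive with the sensor data. This is a minimal sketch of the idea, not the actual performance code, and the numbers are only placeholders.

```
// Minimal Processing sketch of the intended motion:
// slow, linear drift from left to right, with speed and
// waviness as the parameters to be driven by sensor data.

float phase = 0;       // current horizontal offset of the wave
float speed = 0.3;     // slow drift in pixels per frame (to be sensor-driven)
float waviness = 20;   // wave amplitude in pixels (to be sensor-driven)

void setup() {
  size(800, 400);
  stroke(255);
  noFill();
}

void draw() {
  background(0);
  beginShape();
  for (int x = 0; x < width; x++) {
    // one gentle sine wave travelling to the right
    float y = height / 2 + sin((x - phase) * 0.02) * waviness;
    vertex(x, y);
  }
  endShape();
  phase += speed;  // higher speed or waviness = more intensity and interest
}
```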

Prototype design

For years I've been thinking about expressing my inner meditation state through a water metaphor. The movement of water is endlessly fascinating and mysterious, and to my mind perfectly suited to my intentions. I looked for inspiration online, which helped set the boundaries for which software environment to choose.
After exploring various platforms, languages and libraries I ended up with good old Processing. I found this sketch online, which offered a nice starting point to build on, and started modifying it.

Exploring the Box Waves Processing sketch

Considering I wanted a complex and lively wave animation, I chose pitch (the nodding movement of the head), breathing (top and bottom), finger pressure and heart rate as input sensors.
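A rough sketch of how those four sensor streams could map onto the wave parameters is shown below. The variable names and value ranges are my own illustration, not the Meditation Lab API; in the real prototype these values arrive live from the suit.

```
// Hypothetical mapping of the four chosen sensors onto wave parameters.
// Value ranges are placeholders; the real sketch reads these live from the suit.

float pitch;          // nodding movement of the head, e.g. -90..90 degrees
float breathTop;      // upper breathing sensor, e.g. 0..1
float breathBottom;   // lower breathing sensor, e.g. 0..1
float fingerPressure; // e.g. 0..1
float heartRate;      // beats per minute

float waveSpeed, waveAmplitude, waveFrequency, waveBrightness;

void mapSensorsToWave() {
  // breathing drives the slow rise and fall of the wave
  waveAmplitude  = map(breathTop + breathBottom, 0, 2, 10, 60);
  // head pitch bends the wave shape
  waveFrequency  = map(pitch, -90, 90, 0.01, 0.04);
  // finger pressure adds intensity through brightness
  waveBrightness = map(fingerPressure, 0, 1, 100, 255);
  // heart rate sets the drift speed, kept slow on purpose
  waveSpeed      = map(heartRate, 50, 120, 0.2, 1.0);
}
```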
SoftMed Prototype

Interaction with the audience

I have been thinking about how to make the performance multi-directional: I wanted to somehow include the audience in what is happening on the screen. What both the audience and I share are the sounds in the room. I decided to use the marker button provided with the suit to change the animation speed depending on the loudness of the sounds. My idea was that over time the audience would notice the relation between the sounds and the speed.
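One possible way to implement this, sketched below, is to let every press of the marker button give the animation speed a small boost that slowly decays again, so louder (busier) moments in the room translate into faster waves. This is my own sketch of the idea; the actual button events come from the suit's software, and markerPressed() is just an illustrative name.

```
// Sketch of the marker-button interaction: each press (made when I hear
// a sound in the room) nudges the animation speed up, after which it
// slowly decays back to the calm baseline.

float baseSpeed = 0.3;   // calm baseline speed
float speedBoost = 0;    // extra speed added by recent button presses

void markerPressed() {   // illustrative handler, called when a press is registered
  speedBoost += 0.5;     // louder room = more presses = faster waves
}

float currentSpeed() {
  speedBoost *= 0.99;    // let the boost fade away over time
  return baseSpeed + speedBoost;
}
```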

The first performance

I was invited to give a short presentation at the Human-Technology Relations: Postphenomenology and Philosophy of Technology conference at the University of Twente. Instead of a talk I decided to test my prototype. The performance could only last five minutes, and I had programmed the sound of a bell at the beginning and the end. I was facing the wall while the audience looked at a big screen above my head.


I was a bit nervous about how it would be to meditate in front of some 30 strangers. But once I sat down it was just like it always is: noticing my body (pounding heart) and mind.

I was less pleased with the demo effect. One sensor was not working properly (I still don't know why). This created hard-edged shapes and motion from right to left, the exact opposite of the intended animation.

I tried pressing the marker button whenever I heard something, but as the performance progressed the room became more and more silent. That, I suppose, is a sign that it worked, but not something I had counted on.

Measurements

I am of course interested in the effects of the performance, so I supplied the audience with the Brief Mood Introspection Scale (BMIS). Four sub-scores can be computed from the BMIS: Pleasant-Unpleasant, Arousal-Calm, Positive-Tired and Negative-Relaxed mood. I asked the audience to fill the questionnaire in before (baseline) and after the performance. Ten questionnaires were returned, of which six were complete and correct. I am working on the results and will report on them in a later post.

Reactions

I was pleased to hear that people were fascinated by the wave and tried to work out what it signified. People found the performance interesting and aesthetically pleasing. We discussed what caused the effects: the context, the staging of me sitting there and people wanting to comply, the animation or the silence? A lot of things to explore further!
One participant came up to me later and explained how much impact the performance had on him. He found it very calming. "Everything just dropped from me," he explained. It also made him think about the silence in his life and about looking inward more. This is all I can hope to achieve. I continue my research with new energy and inspiration.

The next version of the performance will be on show during the biggest knowledge festival of the southern Netherlands (het grootste kennisfestival van zuidnederland) in Breda on September 13th.

References
- Feng, Chao, Bartram, Lyn, & Gromala, Diane (2016). Beyond Data: Abstract Motionscapes as Affective Visualization. Leonardo, 50. doi:10.1162/LEON_a_01229.
- Lockyer, Matt, & Bartram, Lyn (2012). Affective motion textures. Computers & Graphics.
- Piff, Paul K., Dietze, Pia, Feinberg, Matthew, Stancato, Daniel, & Keltner, Dacher (2015). Awe, the Small Self, and Prosocial Behavior. Journal of Personality and Social Psychology, 108, 883-899. doi:10.1037/pspi0000018.

about breathing_time

For the TIK festival documentation I wrote an article about breathing_time:

Background and concept

Breathing_time was conceived as part of the Time Inventors Kabinet[1] project, for which I was an invited artist. The idea behind this project was to use different ecological inputs for creating new notions of time. Right from the start I had the idea to work with physiological data as input for a new time. Can we make time more personal if it is driven by our own body? Can we change our perception of time through growing awareness of the way our body functions? These were the thoughts that motivated the work.

As the concept of the windclock[2] was a central theme in the TIK project, the most obvious physiological data to work with was breathing.

Early on in the project I had the idea of representing this personal data in a direct way using analogue techniques like drawing. I experimented a lot with ink and stains and made a hand-driven drawing machine that drew a line of varying thickness depending on the speed of breathing. I drew inspiration from Japanese calligraphy techniques, especially ensō[3]. While the idea of ink stayed, it changed from analogue to digital: an animation with sound to represent the breath flow.

I wanted to work with a group of five people breathing at the same time and explore whether becoming aware of someone else's breathing pattern would influence your own, and whether we could reach a certain entrainment, our own rhythm. This resulted in two performances at the TIK festival.

Hardware

I built a custom device, the breathCatcher, using the JeeLabs RBBB Arduino[4], the Modern Device Wind Sensor[5] and the USB Bub[6]. The device is cone-shaped to capture the breath flow in both directions, with the wind sensor placed in the opening of the cone. The cone is worn over the nose and mouth; breathing in and out through the nose is required. A felt ring protects the face from the sharp paper edge, and a felt container at the bottom holds and protects the microcontroller. The paper device is connected to a PC by a cable using a USB-to-serial connection.
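On the PC side the device shows up as a serial port. A minimal Processing snippet for reading it could look like the one below; the baud rate and the "one flow reading per line" format are assumptions made for illustration, not the exact protocol of the breathCatcher firmware.

```
// Minimal sketch for reading the breathCatcher over the USB-to-serial
// connection. Baud rate and line format are assumed for illustration.

import processing.serial.*;

Serial port;
float breathFlow;  // latest reading from the wind sensor

void setup() {
  size(200, 200);
  // open the first available serial port
  port = new Serial(this, Serial.list()[0], 9600);
  port.bufferUntil('\n');  // fire serialEvent() once per line
}

void serialEvent(Serial p) {
  String line = p.readStringUntil('\n');
  if (line != null) {
    breathFlow = float(trim(line));  // one sensor value per line
  }
}

void draw() {
  background(0);
  text(breathFlow, 20, 100);  // show the raw value while developing
}
```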

Sensor platform

For working with the sensor data I used the CommonSense platform[7]. I was sponsored by Sense-os, the creators of that platform. CommonSense is an advanced online platform with a comprehensive API[8] for working with sensor data. After creating an account you can create sensors, five in my case, and upload data to and download data from the server. Different queries are possible and basic visualisation is available, which comes in very handy when you are developing.

I received a lot of help from Sense-os with connecting to the API and querying the database. All data is exchanged in JSON format, which is very particular about quotes and therefore took some care to work with.

For them the challenge lay in the near real-time service of sending and receiving five times ten data points per second (ten per second for each of the five sensors). I was advised to use a cable instead of Wi-Fi to ensure minimal data loss.
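To give an idea of the data exchange, the fragment below builds one breath data point as JSON with the org.json classes and posts it with the java.net classes mentioned in note [10]. The endpoint URL, the field names and the session header are placeholders of my own; the real CommonSense API calls differ and require authentication, so this is only a sketch of the mechanism.

```
// Illustration of sending one data point as JSON, using the org.json and
// java.net classes mentioned in note [10]. The URL, field names and the
// session header are placeholders, not the real CommonSense endpoints.

import org.json.JSONObject;
import java.net.URL;
import java.net.HttpURLConnection;
import java.io.OutputStreamWriter;

void sendDataPoint(float breathValue, double timestamp) {
  try {
    JSONObject point = new JSONObject();
    point.put("value", breathValue);   // org.json takes care of the quoting
    point.put("date", timestamp);      // shared NTP-based timestamp

    URL url = new URL("https://example.com/sensors/1234/data");  // placeholder
    HttpURLConnection conn = (HttpURLConnection) url.openConnection();
    conn.setRequestMethod("POST");
    conn.setRequestProperty("Content-Type", "application/json");
    conn.setRequestProperty("X-SESSION-ID", "placeholder-session");  // placeholder
    conn.setDoOutput(true);

    OutputStreamWriter out = new OutputStreamWriter(conn.getOutputStream());
    out.write(point.toString());
    out.close();

    println("server responded: " + conn.getResponseCode());
  } catch (Exception e) {
    println("upload failed: " + e.getMessage());
  }
}
```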

Software

I wrote custom software, drawingBreath, in Processing[9]. I used some native Java and a few extra libraries and classes.[10] This software handles all communication with the CommonSense API and uses several timers to keep the tasks of sending and receiving data separated.
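The separation of sending and receiving can be sketched with simple millis()-based timers, as below. drawingBreath itself uses the guicomponents GTimer class mentioned in the notes, but the principle is the same: each task fires at its own interval so that neither blocks the other.

```
// Sketch of the timer structure: sending and receiving run on their own
// intervals. The real software uses the guicomponents GTimer class;
// millis() is used here to keep the example self-contained.

int lastSend = 0;
int lastReceive = 0;
final int SEND_INTERVAL = 100;     // send ten data points per second
final int RECEIVE_INTERVAL = 200;  // poll the server for the other breaths

void draw() {
  int now = millis();
  if (now - lastSend >= SEND_INTERVAL) {
    lastSend = now;
    // sendDataPoint(...) : upload the latest local breath value
  }
  if (now - lastReceive >= RECEIVE_INTERVAL) {
    lastReceive = now;
    // fetchRemoteData(...) : download the other participants' values
  }
  // the rest of draw() renders the animation
}
```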

For the first 60 seconds the software calibrates all five devices so it can detect the direction of the breath flow. The temperature sensor proved very useful for that purpose.
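The calibration step can be thought of as recording a 60-second baseline per device, against which later readings are compared to decide which way the air is flowing. A simplified version is sketched below; the variable names and threshold logic are my own illustration and may differ from drawingBreath.

```
// Simplified calibration: during the first 60 seconds the average reading
// (the temperature sensor was useful here) is recorded as a baseline;
// afterwards readings above or below that baseline indicate the direction
// of the breath flow.

final int CALIBRATION_MS = 60 * 1000;
float baselineSum = 0;
int baselineCount = 0;
float baseline = 0;
boolean calibrated = false;

void updateCalibration(float reading) {
  if (!calibrated) {
    baselineSum += reading;
    baselineCount++;
    if (millis() > CALIBRATION_MS) {
      baseline = baselineSum / baselineCount;
      calibrated = true;  // from here on the animation can start
    }
  }
}

// after calibration: positive = breathing out, negative = breathing in
// (or the other way round, depending on the sensor placement)
float breathDirection(float reading) {
  return reading - baseline;
}
```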

After the breath flow has been calibrated the animation starts. Each of the five participants is represented by a 'brush tip' which starts to draw a circle. A red dot moving counter-clockwise represents breathing in; a blue dot moving clockwise represents breathing out. The radius of the circle is determined by the strength of the breath flow, as are the size of the tip and its colour intensity. In between breaths the drawing clears to start again.
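In rough Processing terms one brush tip looks like the sketch below: an angle that decreases (counter-clockwise, red) while breathing in and increases (clockwise, blue) while breathing out, with the radius, tip size and colour intensity following the strength of the breath flow. The numbers are purely illustrative.

```
// Rough sketch of one participant's 'brush tip'. Numbers are illustrative.

float angle = 0;

void drawBrushTip(float cx, float cy, float flow, boolean breathingIn) {
  float strength = abs(flow);
  float radius  = map(strength, 0, 1, 20, 120);  // breath strength sets the circle size
  float tipSize = map(strength, 0, 1, 2, 14);

  if (breathingIn) {
    angle -= 0.03;                                  // counter-clockwise
    fill(255, 0, 0, map(strength, 0, 1, 60, 255));  // red, intensity from flow
  } else {
    angle += 0.03;                                  // clockwise
    fill(0, 0, 255, map(strength, 0, 1, 60, 255));  // blue
  }
  noStroke();
  ellipse(cx + cos(angle) * radius, cy + sin(angle) * radius, tipSize, tipSize);
}
```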

Other software used in, and in aid of, this project included Csound, Skype, Dropbox (see below) and NTP[11]. The latter was very important, as the timestamps of the breath data points had to be in sync across all participants.

Adding sound

My friend Richard van Bemmelen, a musician and programmer, kindly offered to help me add sound to the animation. My idea was to create a bamboo wind chime with our breaths, producing a sound only when the breath status changed from in to out or vice versa. Richard is an advanced user of Csound[12] and wanted to use that program. As bamboo already exists as an opcode[13] we could start quickly. The sound produced by Csound wasn't the rattle of sticks but a far more beautiful, flute-like sound, with a pitch that depends on the value of the breath flow data. To make everything work, Csound had to be installed on all the participants' PCs, and a custom .csd file which defines the settings for the synthesizer was placed in the Csound folder. To make starting the sound part easy, Richard created a batch file that would start Csound and make it wait for messages from Processing. For communicating with Csound the oscP5 library[14] was used in Processing. A message with the breath value was sent whenever the breath status changed.
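The Processing side of this connection is small: with oscP5 it boils down to sending a single OSC message to the waiting Csound instance whenever the breath status flips. The address pattern, port and function name below are placeholders for whatever the actual .csd file listens to.

```
// Sending the breath value to Csound with the oscP5 library whenever the
// breath status changes from in to out or vice versa. The OSC address,
// port and onBreathSample() are illustrative placeholders.

import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress csound;
boolean lastBreathingIn = false;

void setup() {
  oscP5 = new OscP5(this, 12000);               // local listening port
  csound = new NetAddress("127.0.0.1", 47120);  // placeholder Csound port
}

void onBreathSample(float flow, boolean breathingIn) {
  if (breathingIn != lastBreathingIn) {         // status flipped: trigger the bamboo sound
    OscMessage msg = new OscMessage("/breath"); // placeholder address pattern
    msg.add(flow);                              // the pitch in Csound depends on this value
    oscP5.send(msg, csound);
    lastBreathingIn = breathingIn;
  }
}
```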

The performances

breathing_time was a networked performance. I selected five people of different nationalities to take part in the experiment; with that I wanted to underline the universal character of breathing. From five different locations these five people would create sound and visuals using only their breath. Through the drawingBreath software all participants saw the same animation and heard the same sounds, so the output could act as feedback for them. I was in Brussels, performing for an audience that saw and heard the same things as the participants.

One thing that took a lot more effort than anticipated was preparing the participants for the actual performances. To test the server and the different versions of the software we had planned four test sessions at the start, but first all the software had to be installed on the different computers. Right at the beginning I had to move everybody to the Windows platform, as running a Processing application built on a Windows PC on a Mac turned out to be a hassle. The drivers for the USB Bub were also unavailable for the Mac.

Having equipped two participants with my old laptops, we could start testing. The Sense-os server did a very good job. The main problem was instructing everybody and making sure that the software and Csound updates were put in the right folders. I used Dropbox[15] to supply updates and manuals, but even that was hard for some people. Through Skype I gave live instructions and could answer questions from all participants at the same time. After a good final rehearsal it was time for the real thing.

The performances started with each participant introducing themselves in a pre-recorded sound file, in both their mother tongue and English. At exactly 19:00 everybody started their drawingBreath program, and calibration began as the introductions continued.

Our assignment for the performances was: relax and breathe naturally, try to detect your own breath circle and see if you can leave some time between each breath. If these moments in between breaths coincided, the screen would be cleared and we would have reached some sort of communal breathing.

The most important thing I learned from the performances is that breathing is a very personal thing that isn't easily manipulated. This shows very well in the CommonSense logs, where you can see each breathing pattern almost as a signature.[16] Our breathing gaps didn't coincide, but the different movements of the breath flows were interesting to watch.

I also realised that although the performances went reasonably well, this is just the beginning. There are so many things that could be improved for which I simply lacked the time. Enthusiastic reactions have brought me new ideas for working with the concept. I'm considering creating an online community to improve the hardware and software, to breathe together online and explore the idea of creating a communal "breathing time" further.

Specifications

drawingBreath software (Processing & Java), breathCatcher hardware (Arduino RBBB, Modern Device Wind sensor, USB Bub, USB cable, paper, felt, elastic band), sensor platform (CommonSense API), sound (Csound & Processing)

Credits

Concept, design, development & programming: Danielle Roberts

Sound: Richard van Bemmelen

CommonSense API: Sense-os

Participants: Adriana Osorio Castrillon, Lorenzo Brandli, Mieke van den Hende, Tomoko Baba

Location: Imal, Brussels

Also made possible by OKNO

Blog: http://www.numuseum.nl/blog/category/breathing_time/



[1] http://timeinventorskabinet.org/

[2] http://www.timeinventorskabinet.org/wiki/doku.php/windclocks

[3] https://en.wikipedia.org/wiki/Ensō

[4] http://jeelabs.com/products/rbbb

[5] http://shop.moderndevice.com/products/wind-sensor

[6] http://jeelabs.com/products/usb-bub

[7] http://www.sense-os.nl/commonsense

[8] http://www.sense-os.nl/api-console

[9] http://processing.org/

[10] Processing serial and net, guicomponents GTimer class, org.json and Java.net.URL and URLConnection classes

[11] http://www.meinberg.de/english/sw/index.htm

[12] http://www.csounds.com/

[13] http://www.csounds.com/manual/html/bamboo.html

[14] http://www.sojamo.de/libraries/oscP5/

[15] http://www.dropbox.com

[16] http://www.numuseum.nl/blog/2012/05/11/performance-11-5/

performance 12-5

The second performance at the TIK festival was very different from the first. The sound was on and everybody was present, according to the logs, but the animation wasn't as nice. I realised later that this was due to poor data throughput: an installation was running that took up a lot of bandwidth at times, so not all the breath flows were visible. But it was still worthwhile, I suppose, judging from this nice picture by Annemie Maes:

I realised after both performances that this is only the start. I managed in a relatively short time to tackle all the major hurdles, but there's a lot to be improved and added. I understood from the participants and the audience that they find it exciting to breathe and create something together, so my idea of bringing people together through breath seems to work. I'd like to explore this further, and I'm considering turning this into an open source project and developing a kit people can work with, so they can join the community of breathers ;-)

The logs show even more differentiation than during the first performance:

performance 11-5

Last Friday was the première of the breathing_time performance. I was very, very nervous, so nervous that I forgot to start the sound software… But the animation was beautiful and the data came through very well. One participant wasn't present; I don't know what went wrong. He only came online at 19:15, as you can see from the logs below.

The logs show very nicely how the breathing patterns of all participants differ:

These visualisations are from the Sense-os website (timeline view). Here are some of the drawingBreath visuals:

As an encore I did a little session with sound by myself.