breathing_time at the Quantified Self conference

On May 12th I led a breakout session at the second European Quantified Self conference in Amsterdam. The goal was to exchange experiences in breath and group tracking and to demo the new, wireless version of the breathing_time concept.

I started the breakout with an overview of the previous version. We soon got into a discussion on how hard it is to control your breathing rate. One participant used an emWave device to try to slow down his breath rate. He could never quite make the target and therefore could never reach heart coherence, which was frustrating. In my view the way to go is to become more and more aware of your breathing without intentionally wanting to change it. I went from chronic hyperventilation to an average breath rate of 4 times per minute without trying; doing daily Zen meditation for many years has done it for me.

As usual people saw some interesting applications for the device that I hadn’t thought of, like working with patient groups. Another nice suggestion was to try out the placebo effect of just wearing the cone.

When it was time for the demo people could pick up one of the breathCatchers:

I’d managed to finish four wireless wearables, each running on a 12 volt battery with an XBee module and an Arduino Fio for transmitting the data.

After some exploration we did two short breathing sessions so we could compare. The first was to just sit in a relaxed way and not really pay attention to the breathing (purple line). The second was to really focus on the breathing (grey line). The graph below shows the results:

Participants could look at the visual feedback, but I noticed most closed their eyes to concentrate better.

The last experiment was the unified visualisation of four participants. I asked them to pay close attention to the visualisation, which represented the data as four concentric circles. On each circle a moving dot indicates the breathing, its speed depending on the breath flow.

It was fascinating to watch, as the dots were moving simultaneously a lot of the time. However, when asked how they experienced this session, most participants saw the exercise as a game and were trying to overtake each other. They used “breath as a joystick”, to quote one of them. This was not my intention; the focus should be on the unifying aspect. I got some nice suggestions on how to achieve this: give more specific instructions and adapt the visuals to separate the personal and communal data.

All in all we had a very good time exploring respiration and I’m grateful to all of the participants for their enthusiasm and valuable feedback.

xbee hello world!

Today I’ve had my first success with communication between two XBees, mostly thanks to this simple but clear tutorial. After installing the XCTU software with Thijs a few months back I had forgotten quite a lot of his private class “introduction to XBee”. Following his instructions I ordered seven XBee antennas and one XBee Explorer USB. My goal is to make an XBee network without pairing each XBee with an Arduino, which apparently is possible. But before getting to that point I had to make an XBee “hello world” to grasp the concept and get the basics right.

In this picture you see a light sensor attached to an Arduino and an XBee antenna. The Arduino prints the measurements to the serial port. Through the TX and RX pins the Arduino is connected to the XBee antenna, which sends the data to the other XBee antenna acting as a receiver. The data is printed in red in the XCTU terminal on the right.

On to the next step: running the XBee on a battery and programming the XBee pins to read and send the wind sensor data. To be continued…

non-woven wearable

Because I’m extending the breathing_time project into a workshop I’m doing some research on non-woven materials to make the cones from. The first version was made of paper and felt. It looked very nice but wasn’t very practical: paper folds and crumples easily, and the felt on the face gets dirty and, as it was glued to the paper, couldn’t be replaced. The Wearable Senses department of the TU/e kindly gave me some samples to experiment with. These are the results:

Lantor, producer of all sorts of non-woven materials:

To start off with the best one. This is a thin, black non-woven. It’s very easy to work with and can be glued with just ordinary Collall glue, which sticks very well. The ease of working reminds one of paper. It has some nice extras: you can use sticky tape on it and remove that tape without leaving a trace, even after a few days:

This is very useful: it allows me to make a replaceable, protective edge, as the bare edge is too sharp on the face. You can also glue two layers on top of each other to make the cone firmer. This has a very stylish appearance:

You can just use scissors to cut out the shape, and it doesn’t tear like paper, so attaching the strap is no problem.

I also tried another non-woven by Lantor. It has a felt-like appearance. Very nice, but it is too floppy for my purpose and quite hard to glue:

Colbond, producer of all sorts of non-woven materials:

This semi-transparent, thermally bonded non-woven has a very appealing look. It is stiff, even a bit sharp on the edges. I was really looking forward to trying this out, but the result was a bit disappointing. It was hard to glue due to its open structure. It also turned out to be very brittle: a fold doesn’t go away (see the right end). In that sense it is worse than paper. As I will be reusing these cones with different people they have to stay clean and in shape. This one didn’t pass that test.

about breathing_time

For the TIK festival documentation I wrote an article about breathing_time:

Background and concept

Breathing_time was conceived as part of the Time Inventors Kabinet[1] project for which I was an invited artist. The idea behind this project was to use different ecological input for creating new notions of time. Right from the start I had the idea to work with physiological data as input for a new time. Can we make time more personal if it is driven by our own body? Can we change our perceptions of time through growing awareness of the way our body functions? These were thoughts that motivated the work.

As the concept of the windclock[2] was a central theme in the TIK project, the most obvious physiological data to work with was breathing.

Early on in the project I had the idea of representing this personal data in a direct way using analogue techniques like drawing. I experimented a lot with ink and stains and made a hand-driven drawing machine that drew a line of varying thickness depending on the speed of breathing. I drew inspiration from Japanese calligraphy techniques, especially ensō[3]. While the idea of ink stayed, the medium changed from analogue to digital: an animation with sound that represents the breath flow.

I wanted to work with a group of five people breathing at the same time and explore whether becoming aware of someone else’s breathing pattern would influence your own, and whether we could reach a certain entrainment, our own rhythm. This resulted in two performances at the TIK festival.


I built a custom device, the breathCatcher, using the JeeLabs RBBB Arduino[4], the Modern Device Wind Sensor[5] and the USB Bub[6]. The device is cone shaped to capture the breath flow in both directions. The wind sensor is placed in the opening of the cone. The cone should be worn over the nose and mouth; breathing in and out through the nose is required. A felt ring protects the face from the sharp paper edge, and a felt container at the bottom holds and protects the microcontroller. The paper device is connected to a PC by a cable using a USB-to-serial connection.

Sensor platform

For working with the sensor data I used the CommonSense platform[7]; I was sponsored by Sense-os, the creators of that platform. CommonSense is an advanced online platform with a comprehensive API[8] for working with sensor data. After creating an account you can create sensors, five in my case, and upload data to and download data from the server. Different queries are possible and basic visualisation is available, which comes in very handy when you are developing.

I received a lot of help from Sense-os with connecting to the API and querying the database. All data is exchanged in JSON format, which is very particular about quotes; that made it hard to work with.
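Most of that trouble came down to building exactly the right string. As a sketch of the idea (the field names here are illustrative, not the exact CommonSense schema), a batch of breath data points could be assembled like this:

```python
import json
import time

def breath_point(value, timestamp=None):
    """Build one sensor data point. Using json.dumps rather than
    hand-built strings is what finally kept the double quotes right.
    Field names are illustrative, not the exact API schema."""
    if timestamp is None:
        timestamp = time.time()
    return {"value": str(value), "date": round(timestamp, 3)}

# one upload batch for a single sensor
payload = json.dumps({"data": [breath_point(0.42, 1336842000.0)]})
```

The resulting `payload` is a correctly quoted JSON string, ready to be posted to the sensor's data endpoint.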

For them the challenge lay in the near real time service of sending and receiving five times ten data points per second. I was advised to use a cable instead of Wifi to ensure minimal data loss.


I wrote custom software, drawingBreath, in Processing[9], using some native Java and a few extra libraries and classes.[10] This software performs all the connections with the CommonSense API. It uses several timers to keep the tasks of sending and receiving data separated.
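The timer idea can be sketched in a few lines. This is a minimal stand-in, not the actual drawingBreath code, for running the send and receive tasks on independent clocks, much like the GTimer objects in the Processing sketch:

```python
import threading
import time

class RepeatingTimer:
    """Run a task every `interval` seconds on its own timer thread,
    so sending and receiving never block each other (a sketch, not
    the actual drawingBreath code)."""
    def __init__(self, interval, task):
        self.interval, self.task = interval, task
        self._timer = None

    def _run(self):
        self.task()
        self.start()                      # re-arm for the next tick

    def start(self):
        self._timer = threading.Timer(self.interval, self._run)
        self._timer.daemon = True
        self._timer.start()

    def stop(self):
        if self._timer is not None:
            self._timer.cancel()

sent, received = [], []
send_timer = RepeatingTimer(0.1, lambda: sent.append("sample"))    # ~10 Hz upload
recv_timer = RepeatingTimer(0.5, lambda: received.append("batch")) # slower download
send_timer.start(); recv_timer.start()
time.sleep(0.35)                          # let the timers tick a few times
send_timer.stop(); recv_timer.stop()
```

Each task re-arms its own timer after running, so a slow network round-trip in one task only delays that task's next tick.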

For 60 seconds the software calibrates all five devices so as to be able to detect the direction of the breath flow. The temperature sensor was very useful for that purpose.
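The calibration boils down to averaging a minute of readings into a per-device baseline and then comparing each new temperature reading against it. A minimal sketch of that logic, with made-up threshold and values:

```python
def calibrate(samples):
    """Average the readings gathered during the 60-second calibration
    into a per-device baseline (a sketch, not the drawingBreath code)."""
    return sum(samples) / len(samples)

def breath_direction(temp, baseline, margin=0.2):
    """Classify the flow direction: exhaled air warms the sensor above
    the baseline, inhaling cools it back down. The margin (made up
    here) absorbs sensor noise."""
    if temp > baseline + margin:
        return "out"
    if temp < baseline - margin:
        return "in"
    return "pause"

baseline = calibrate([21.0, 21.2, 20.8, 21.0])   # raw temperature samples
```

With the baseline fixed, each incoming sample maps to "in", "out" or the pause in between.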

After the breath flow has been calibrated, the animation starts. Each of the five participants is represented by a ‘brush tip’ which starts to draw a circle. A red dot moving counter-clockwise represents breathing in; a blue dot moving clockwise represents breathing out. The radius of the circle is determined by the strength of the breath flow, as are the size of the tip and its colour intensity. In between breaths the drawing clears to start again.
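The movement of a brush tip comes down to a bit of polar geometry: the breath flow drives both the angular speed (with the in/out status giving the direction) and the radius. A sketch with made-up constants, not the actual drawing code:

```python
import math

def tip_position(angle, flow, breathing_in):
    """One animation step for a brush tip: advance the dot around the
    circle, counter-clockwise while inhaling and clockwise while
    exhaling; the radius grows with the strength of the breath flow.
    The constants are illustrative."""
    step = 0.05 * abs(flow)                    # flow drives the speed
    angle += step if breathing_in else -step   # in/out sets the direction
    radius = 50 + 100 * abs(flow)              # flow drives the radius
    x = radius * math.cos(angle)
    y = radius * math.sin(angle)
    return angle, (x, y)
```

Calling this once per incoming data point, ten times a second per participant, traces the circles described above.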

Other software used in, and in aid of, this project was Csound, Skype, Dropbox (see below) and NTP[11]. The latter was very important, as the timestamp for every breath data point had to be the same on all machines.

Adding sound

My friend Richard van Bemmelen, a musician and programmer, kindly offered to help me add sound to the animation. My idea was to create a bamboo wind chime with our breaths, producing a sound only when the breath status changed from in to out or vice versa. Richard is an advanced user of Csound[12] and wanted to use that program. As bamboo already exists as an opcode[13] we could start quickly. The sound produced by Csound wasn’t the rattle of sticks but a far more beautiful, flute-like sound; the pitch depends on the value of the breath flow data. To make everything work, Csound had to be installed on all the participants’ PCs, and a custom .csd file which defines the settings for the synthesizer was placed in the Csound folder. To make starting the sound part easy, Richard created a batch file that would start Csound and make it wait for messages from Processing. For communicating with Csound the oscP5 library[14] was used in Processing; a message with the breath value was sent whenever the breath status changed.
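The chime logic itself is simple edge detection: nothing is sent while the status stays the same, and one message goes out at each turn of the breath. A sketch, with a plain callback standing in for the oscP5 message to Csound:

```python
class ChimeTrigger:
    """Fire a sound event only when the breath status flips from 'in'
    to 'out' or back, like a chime struck at each turn of the breath.
    `send` stands in for the OSC message to Csound."""
    def __init__(self, send):
        self.send = send
        self.status = None

    def update(self, status, flow):
        if status != self.status and status in ("in", "out"):
            self.send(flow)          # pitch follows the breath-flow value
        self.status = status

events = []
trigger = ChimeTrigger(events.append)
for status, flow in [("in", 0.3), ("in", 0.5), ("out", 0.4), ("out", 0.2), ("in", 0.6)]:
    trigger.update(status, flow)
# only the three status changes produced an event
```

Five data points go in, but only the three turns of the breath reach Csound.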

The performances

breathing_time was a networked performance. I selected five people of different nationalities to take part in the experiment; with that I wanted to underline the universal character of breathing. From five different locations these five people would create sound and visuals using only their breath. Thanks to the drawingBreath software all participants saw the same animation and heard the same sounds, and this output could act as feedback for them. I was in Brussels, performing for an audience that saw and heard the same things as the participants.

One thing that took a lot more effort than anticipated was preparing the participants for the actual performances. To test the server and the different versions of the software we had planned four test sessions at the start. But first all the software had to be installed on the different computers. Right at the beginning I had to move everybody to the Windows platform, as running a Processing application built on a Windows PC turned out to be a hassle on a Mac. Also, the drivers for the USB Bub were absent for the Mac.

Having equipped two participants with my old laptops, we could start testing. The Sense-os server did a very good job. The main problem was instructing everybody and making sure that the software and Csound updates were put in the right folders. I used Dropbox[15] to supply updates and manuals, but even that was hard for some people. Through Skype I gave live instructions and could answer questions from all participants at the same time. After a good final rehearsal it was time for the real thing.

The performances started with each participant introducing him or herself in a pre-recorded sound file, in both their mother tongue and English. At exactly 19:00 everybody would start their drawingBreath program, and calibration started as the introductions continued.

Our assignment for the performances was: relax and breathe naturally. Try to detect your own breath circle and see if you can leave some time between each breath. If these moments in between breaths coincided, the screen would be cleared and we would have reached some sort of communal breathing.
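The clearing rule can be stated in one line: the screen clears only when every participant is between breaths at the same moment. A sketch of that rule, not the performance code:

```python
def communal_pause(statuses):
    """True only when every participant is between breaths at the same
    moment: the cue for clearing the screen."""
    return all(status == "pause" for status in statuses)

# five participants, one still exhaling: no clear yet
almost = communal_pause(["pause", "pause", "out", "pause", "pause"])
# all five between breaths: communal breathing reached
together = communal_pause(["pause"] * 5)
```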

The most important thing I learned from the performances is that breathing is a very personal thing that isn’t easily manipulated. This shows very well in the CommonSense logs, where you can see each breathing pattern almost as a signature.[16] Our breathing gaps didn’t coincide, but the different movements of the breath flows were interesting to watch.

I also realised that although the performances went reasonably well, this is just the beginning. There are so many things that could be improved for which I simply lacked the time. Enthusiastic reactions have brought me new ideas for working with the concept. I’m considering creating an online community to improve the hard- and software: to breathe together online and explore the idea of creating a communal “breathing time” further.


drawingBreath software (Processing & Java), breathCatcher hardware (Arduino RBBB, Modern Device Wind sensor, USB Bub, USB cable, paper, felt, elastic band), sensor platform (CommonSense API), sound (Csound & Processing)


Concept, design, development & programming: Danielle Roberts

Sound: Richard van Bemmelen

CommonSense API: Sense-os

Participants: Adriana Osorio Castrillon, Lorenzo Brandli, Mieke van den Hende, Tomoko Baba

Location: Imal, Brussels

Also made possible by OKNO


[10] Processing serial and net libraries, guicomponents GTimer class, org.json and URLConnection classes

performance 12-5

The second performance at the TIK festival was very different from the first. The sound was on and everybody was present, according to the logs. But the animation wasn’t as nice. I realised later that this was due to poor data throughput: an installation was running that at times took up a lot of bandwidth, so not all the breath flows were visible. But it was still worthwhile, I suppose, judging from this nice picture by Annemie Maes:

I realised after both performances that this is only the start. In a relatively short time I managed to tackle all the major hurdles, but there’s a lot to be improved and added. I understood from the participants and the audience that they find it exciting to breathe and create something together, so my idea of bringing people together through breath seems to work. I’d like to explore this further, and I’m considering turning this into an open source project and developing a kit that people can work with so they can join the community of breathers ;-)

The logs show even more differentiation than during the first performance:

performance 11-5

Last Friday was the première of the breathing_time performance. I was very, very nervous. So nervous that I forgot to start the sound software… But the animation was beautiful and the data came through very well. One participant wasn’t present; I don’t know what went wrong, but he came online at 19:15, as you can see from the logs below.

The logs show very nicely how the breathing patterns of all participants differ:

These visualisations are from the Sense-os website, the timeline view. Here are some of the drawingBreath visuals:

As an encore I did a little session with sound by myself.

test session

I’ve been working like mad for the last couple of weeks to get the ‘drawingBreath’ software going. The main issues:

  • working with the Sense-os API, more specifically formatting the strings to be sent to and retrieved from the server
  • getting the custom software to work on the various PCs
  • making the software work for five sensors instead of one

From the above you can tell that I’m just an artist struggling to program without a proper education. But I have learned a lot again, especially about JSON in Java and about iteration. And I was happy I had finished my two Java courses; at least now I had a good idea of what I was doing. The software can now do the following:

  • Log in to the Sense-os platform and get a session id
  • List all the ids of the 5 sensors
  • Read data from the serial port
  • Format (JSON) and send that data with a time stamp
  • Retrieve the data from all 5 sensors
  • Calibrate all 5 sensors
  • Make a drawing for every sensor
  • Make sounds for every sensor
  • The different tasks are all conducted by separate timers

I only want to fine-tune the drawing and its speed, but for the most part it’s finished(!).

I’ve conducted some test sessions with a smaller group, but yesterday evening was the first time there were four of us. It went surprisingly well: no problems with the server, which had been a bit unreliable lately, and the visual and audio results were promising:

hardware progress

I’ve been working with plastic and paper to create different device prototypes. I was very happy with the look of the plastic prototype. I want the design to be light:

The part I like most is where I stitched the plastic together with nylon thread:

But when I did 30 minutes of meditation wearing the device I nearly suffered from hyperventilation; it was so stuffy. This was less obvious when I worked with the paper prototypes. Also, the plastic steamed up very quickly, which points to the greenhouse effect it creates.

So I’ve switched back to paper. That has the advantage that it works a lot quicker and that you can glue the parts. Also, the data is a lot more stable over time. The only thing that still needs looking into is how to make the edge of the cone comfortable. I’ve used the soft part of Velcro until now, but it comes loose. I’m now considering felt: it is a beautiful combination with paper. I must decide quickly now because I still need to make 5 items.

The electronics are done; I just need to calibrate a few more wind sensors, like this:

But I’m getting the hang of it. The sensor needs a soft subsurface (not shown), as exerting pressure on the sensor board influences the values… We keep learning :)

breathing sound

Richard and I worked on the real-time sound synthesis using Csound. Richard is an experienced musician and also a programmer; he’s creating an interface in Python for working with Csound.

We had to know how long the sounds should be to create a breath soundscape. On the internet we found that the average breath rate is between 12 and 20 breath cycles per minute. My average breath rate is 6 per minute, and when I’m very calm it is 3 per minute:

To be on the safe side we also recorded Richard’s breathing pattern, which was indeed closer to the average:

On Richard’s graph we also plotted the temperature values (the yellow line), which are good for indicating breathing in and out.
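These rates translate directly into sound lengths, since the duration of one breath cycle is just 60 divided by the rate: the average 12 to 20 cycles per minute gives 3 to 5 seconds per cycle, while my calm 3 per minute stretches a cycle to a full 20 seconds.

```python
def cycle_seconds(breaths_per_minute):
    """Length of one breath cycle in seconds, used to size the sounds."""
    return 60.0 / breaths_per_minute

fast, slow = cycle_seconds(20), cycle_seconds(12)  # the average range found online
calm = cycle_seconds(3)                            # my calmest rate
```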

From the very start I had the idea to do something with the sound of bamboo wind chimes. I like their soothing sound when the wind makes them bang together. The idea was to create a chime from our breaths. As we started experimenting with the Csound bamboo file (which is just code, by the way), unexpected and fascinating sounds came out: