On May 12th I led a breakout session at the second European Quantified Self Conference in Amsterdam. The goal was to exchange experiences in breath and group tracking and to demo the new, wireless version of the breathing_time concept.
I started the breakout with an overview of the previous version. We soon got into a discussion about how hard it is to control your breathing rate. One participant had used an emWave device to try to slow down his breath rate. He could never quite reach the target, and therefore could never reach heart coherence, which was frustrating. In my view the way to go is to become more and more aware of your breathing without intentionally trying to change it. I went from chronic hyperventilation to an average breath rate of 4 breaths per minute without trying; many years of daily Zen meditation did it for me.
As usual, people saw interesting applications for the device that I hadn’t thought of, like working with patient groups. Another nice suggestion was to test the placebo effect of just wearing the cone.
When it was time for the demo people could pick up one of the breathCatchers:
I’d managed to finish four wireless wearables, each running on 12-volt batteries and using an XBee module with an Arduino Fio to transmit the data.
After some exploration we did two short breathing sessions so we could compare them. In the first, participants just sat in a relaxed way without really paying attention to their breathing (purple line). In the second, they focused closely on their breathing (grey line). The graph below shows the results:
Participants could watch the visual feedback, but I noticed most closed their eyes to concentrate better.
The last experiment was the unified visualisation of all four participants. I asked them to pay close attention to the visualisation, which represented the data as four concentric circles. A moving dot on each circle indicates breathing speed and moves with the breath flow.
It was fascinating to watch, as the dots moved simultaneously much of the time. However, when asked how they experienced the session, most participants said they saw the exercise as a game and tried to overtake each other, using “breath as a joystick”, to quote one of them. This was not my intention; the focus should be on the unifying aspect. I got some nice suggestions on how to achieve this: give more specific instructions, and adapt the visuals to separate the personal and communal data.
All in all we had a very good time exploring respiration and I’m grateful to all of the participants for their enthusiasm and valuable feedback.
The second performance, at the TIK festival, was very different from the first. The sound was on and everybody was present, according to the logs. But the animation wasn’t as nice. I realised later that this was due to poor data throughput: another installation was running that took up a lot of bandwidth at times, so not all the breath flows were visible. Still, it was worthwhile, judging from this nice picture by Annemie Maes:
I realised after both performances that this is only the start. In a relatively short time I managed to tackle all the major hurdles, but there is a lot to be improved and added. I understood from the participants and the audience that they find it exciting to breathe and create something together, so my idea of bringing people together through breath seems to work. I’d like to explore this further, and I’m considering turning this into an open-source project and developing a kit that people can work with, so they can join the community of breathers ;-)
The logs show even more differentiation than during the first performance:
Last Friday was the première of the breathing_time performance. I was very, very nervous. So nervous that I forgot to start the sound software… But the animation was beautiful and the data came through very well. One participant wasn’t present; I don’t know what went wrong, but he came online at 19:15, as you can see from the logs below.
The logs show very nicely how the breathing patterns of all participants differ:
These visualisations are from the sense-os website, the timeline view. Here are some of the drawingBreath visuals:
As an encore I did a little session with sound by myself.
Richard and I worked on the real-time sound synthesis using Csound. Richard is an experienced musician as well as a programmer, and he is creating a Python interface for working with Csound.
We had to know how long the sounds should be in order to create a breath soundscape. Online sources state that the average breath rate is between 12 and 20 breath cycles per minute. My average breath rate is 6 per minute, and when I’m very calm it drops to 3 per minute:
To be on the safe side we also recorded Richard’s breathing pattern, which was indeed closer to the average:
On Richard’s graph we also plotted the temperature values (the yellow line), which are a good indicator of breathing in and out.
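To pick sound durations, the breath rates above can be converted into seconds per cycle with a quick bit of arithmetic (a small sketch; the function name is my own):

```python
def cycle_seconds(breaths_per_minute):
    """Length of one full breath cycle in seconds."""
    return 60.0 / breaths_per_minute

# The rates mentioned above:
for bpm in (20, 12, 6, 3):
    print(f"{bpm:2d} breaths/min -> {cycle_seconds(bpm):4.1f} s per cycle")
# 20 -> 3.0 s, 12 -> 5.0 s, 6 -> 10.0 s, 3 -> 20.0 s
```

So a soundscape has to work for cycles anywhere from about 3 to 20 seconds long.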
From the very start I had the idea to do something with the sound of bamboo wind chimes. I like the soothing sound they make when the wind knocks them together. The idea was to create a chime from our breaths. As we started experimenting with the Csound bamboo file (which is just code, by the way), unexpected and fascinating sounds came out:
I’m working on the breath-detection software and the device design. I’ve concentrated on detecting breathing in and out, and the pause between breaths. The distinction between in and out is especially hard, as there is wind flow in both cases, so I use the difference in temperature to tell them apart. I set two calibration points (each by pressing a different key): one between breaths and one after completely inhaling. In both cases I record the wind value and the temperature value. With these two extremes known, I can now detect the breath status in a rather robust way. See the screen dump from Processing:
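The two-point calibration idea could be sketched roughly like this (an illustrative sketch only, not the original Processing code; the thresholds, sensor values, and class names are my own assumptions):

```python
# Sketch: classify breath status from wind + temperature, using two
# calibration readings -- one taken between breaths, one right after
# completely inhaling. All concrete numbers here are made up.

def make_classifier(rest, inhale):
    """rest and inhale are (wind, temperature) pairs captured at the
    two calibration key presses."""
    wind_rest, temp_rest = rest
    wind_in, temp_in = inhale

    def classify(wind, temp):
        # Normalise the wind reading between the two calibrated extremes.
        flow = (wind - wind_rest) / (wind_in - wind_rest)
        if flow < 0.1:
            return "pause"  # little flow: between breaths
        # Air moves over the sensor in both directions, so use the
        # temperature to separate them: exhaled air is warmer than
        # the ambient air drawn in while inhaling.
        return "exhale" if temp > (temp_rest + temp_in) / 2 else "inhale"

    return classify

classify = make_classifier(rest=(120, 24.0), inhale=(480, 22.5))
print(classify(130, 23.9))  # little flow          -> "pause"
print(classify(400, 22.6))  # strong flow, cool air -> "inhale"
print(classify(420, 24.5))  # strong flow, warm air -> "exhale"
```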
I rather like the space look of the breathing cone. More work needs to be done on it, of course:
I’m currently developing the device design and the software for breathing_time. I realise now that the form factor of the device must follow from the possibilities of the wind sensor. It works best when the breath is guided to the sensor; that way I can detect both inhaling and exhaling. I’ve been trying out different sizes of cones to fit on my face:
And finally settled for a size in between:
In this prototype I built the sensor into the cone, which makes it nice and stable and catches the breath in an optimal way. I also tried a collar-type design:
Here the “collar” catches the wind. It works fine and has some advantages, but you can’t capture inhaling this way.
I’ve done some work on the software. I read both the wind values and the temperature values from the sensor. The temperature values give a good indication of the direction of the breath. I currently use the combined data to determine the direction of the animation (depending on in- or exhaling) and the colour, but of course many more things are possible.
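A minimal sketch of that mapping, assuming exhaled air reads warmer than a baseline temperature (the scales and colour names are placeholders, not the actual animation code):

```python
# Sketch: derive animation direction and colour family from one
# sensor reading. "warm"/"cool" stand in for the real colours.

def animation_params(wind, temp, temp_baseline):
    """Return (direction, colour) for a (wind, temperature) reading.

    Temperature relative to the baseline indicates breath direction
    (exhaled air is warmer); the wind value could drive e.g. speed.
    """
    exhaling = temp > temp_baseline
    direction = 1 if exhaling else -1          # animation direction
    colour = "warm" if exhaling else "cool"    # colour family
    return direction, colour

print(animation_params(wind=300, temp=25.1, temp_baseline=24.0))  # (1, 'warm')
print(animation_params(wind=280, temp=23.2, temp_baseline=24.0))  # (-1, 'cool')
```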
I’ve constructed a “brush” from various shapes in different sizes. It would be nice to generate “brushes” dynamically, depending on the data, but for now I’m still refining the basic detection; I’ll continue with the aesthetics once that is stable.
Constructing the RBBB Arduino isn’t as easy as suggested. The manual says 30 minutes; even if I don’t make any mistakes (to which I’m prone) I might be able to do it in an hour, but no less. After I didn’t manage to get the first one to work, I started with a fresh one (I’ve got to build five). This time I made only one mistake, and when I plugged in the USB cable (be sure to choose Uno from the board list) and uploaded the blink sketch, it worked!
I want to give the users of the device some direct feedback on their own breathing pattern, so I’m thinking of an LED that dims when you exhale and brightens when you inhale. I tried simply putting an LED in the tube, and the result was really nice:
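The dim-on-exhale, brighten-on-inhale behaviour boils down to mapping breath flow onto a PWM brightness value. A hypothetical sketch (the flow scale and function are my own, not the actual firmware):

```python
# Sketch: map a normalised breath flow (-1 = full exhale, +1 = full
# inhale) onto an 8-bit PWM brightness for the LED.

def led_brightness(flow):
    """flow in [-1, 1]; returns a PWM duty value 0..255."""
    flow = max(-1.0, min(1.0, flow))          # clamp to the valid range
    return int(round((flow + 1) / 2 * 255))   # -1 -> 0, +1 -> 255

print(led_brightness(1.0))   # full inhale -> 255 (brightest)
print(led_brightness(-1.0))  # full exhale -> 0 (off)
```

On the Arduino side the returned value would simply be fed to a PWM pin each time a new sensor reading comes in.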
Apart from the light, I want to make users more aware of their breathing by giving them input straight from their body. By tying the micro-controller to the chest (inside some kind of tubing) with an elastic band, you become aware of your breath in a subtle way:
I also did some research on flexible tubing. It looks nice and it is functional, so I’ll probably go for this solution:
So things are shaping up. I’m eagerly awaiting the wind sensors from the States…
I’ve been doing some research and sketching for the breathing_time breath-detection device. It could take the shape of a piece of clothing or lean more towards a device/tool look. I’ve explored the possibilities:
I think the wearable variation is a bit too much:
As air is such a subtle, immaterial thing, I want to underline that in the device. Version 0: make a knot:
Version 1: use these really clean parts from the aquarium trade. The tubing is actually used to supply oxygen to fish, which is a nice extra. I may add a box for the board, and the wind sensor can be adjusted in front of the nose:
I think I’ll build some respiration feedback into the device too. It will make it more lively.
I’ve managed to program the first representation of breathing_time. This is a sequence of the way the drop changes when I’m actually breathing while wearing the stretch sensor. The depth of the breath determines the diameter and alpha value of the ‘drop’. The breath rate determines the blurring and horizontal position of the drop. Slow but deep breathing results in a soft and blurry spot. I’ll post an example of fast breathing later.
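The mapping from the two breath measures to the drop’s parameters could look something like this (an illustrative sketch with made-up scales, not the original Processing code):

```python
# Sketch: depth drives diameter and alpha; rate drives blur and
# horizontal position, as described above. All constants are my own.

def drop_params(depth, rate, width=800):
    """depth: breath depth in [0, 1]; rate: breaths per minute."""
    diameter = 40 + 160 * depth          # deeper breath -> bigger drop
    alpha = int(255 * depth)             # deeper breath -> more opaque
    blur = max(0.0, 20.0 - rate)         # slower breathing -> more blur
    x = width * min(rate, 20) / 20       # faster breathing -> further right
    return diameter, alpha, blur, x

print(drop_params(depth=0.9, rate=4))  # slow, deep: large blurry drop, far left
```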
I’ve upgraded my code for the stretch-sensor graph in Processing. The sensor outputs numbers, and when I breathe in or out the number gets lower. Detecting breathing activity isn’t as easy as looking for a number below 200, for example, because over the time of wearing the sensor the whole range of numbers starts to drift, either up or down. What remains constant is the sharp decrease when breathing in. So now I’m comparing two averages of 5 rounds of serial-port activity each. When their difference is over 107% I know I’m