On May 12th I led a breakout session at the second European Quantified Self Conference in Amsterdam. The goal was to exchange experiences in breath and group tracking and to demo the new, wireless version of the breathing_time concept.
I started the breakout with an overview of the previous version. We soon got into a discussion on how hard it is to control your breathing rate. One participant used an emWave device to try to slow down his breath rate; he could never quite make the target and therefore could never reach heart coherence, which was frustrating. In my view the way to go is to become more and more aware of your breathing without intentionally wanting to change it. I went from chronic hyperventilation to an average breathing rate of 4 breaths per minute without trying; years of daily Zen meditation have done it for me.
As usual people saw some interesting applications for the device that I hadn’t thought of, like working with patient groups. Another nice suggestion was to test the placebo effect of just wearing the cone.
When it was time for the demo people could pick up one of the breathCatchers:
I’d managed to finish four wireless wearables, each running on a 12-volt battery and using an XBee module and an Arduino Fio to transmit the data.
After some exploration we did two short breathing sessions so we could compare. The first was to just sit in a relaxed way and not really pay attention to the breathing (purple line). The second was to really focus on the breathing (grey line). The graph below shows the results:
Participants could look at the visual feedback but I noticed most closed their eyes to be able to concentrate better.
The last experiment was the unified visualisation of four participants. I asked them to pay close attention to the visualisation, which represented the data as four concentric circles. On each circle a moving dot indicates the breathing speed, moving along with the breath flow.
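The core of that visualisation can be sketched in Processing roughly as follows; here the breath-flow values are faked with sine waves, whereas the real ones came in wirelessly from the breathCatchers:

    float[] angle = new float[4];  // current dot position per participant

    void setup() {
      size(600, 600);
    }

    void draw() {
      background(0);
      translate(width/2, height/2);
      for (int i = 0; i < 4; i++) {
        float r = 60 + i * 55;  // radius of this participant's circle
        noFill();
        stroke(255, 80);
        ellipse(0, 0, r * 2, r * 2);
        // stand-in for the measured breath flow of participant i
        float flow = sin(frameCount * 0.02 + i * 0.5);
        angle[i] += flow * 0.05;  // dot speed and direction follow the flow
        noStroke();
        fill(255);
        ellipse(cos(angle[i]) * r, sin(angle[i]) * r, 10, 10);
      }
    }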
It was fascinating to watch: the dots moved simultaneously a lot of the time. However, when I asked how the session was experienced, it turned out most participants had seen the exercise as a game and had been trying to overtake each other, using “breath as a joystick”, as one of them put it. That was not my intention; the focus should be on the unifying aspect. I got some nice suggestions on how to achieve this: give more specific instructions, and adapt the visuals to separate the personal and communal data.
All in all we had a very good time exploring respiration and I’m grateful to all of the participants for their enthusiasm and valuable feedback.
I’ve been programming hard to shape the pages that will represent my life in the calendar. I’ve used Marcos’ statistics to make a nice backdrop for my pages, based on the averages of the stress, energy, mood and inner peace values. Layered on top of that are the distinguishing values for those parameters. I’ve also incorporated the diary, haikus and photographs. It might still need some tweaking, but the basics are there. See for yourself:
And a day with less data:
So the horizontal lines are the energy, the diagonal ones the stress, the V (upright or upside down) is the mood, and the white circles represent inner peace. All vary in colour and repetition depending on the value. I do love the different patterns that are drawn. Quite surprising.
I’ve been experimenting with the design and data visualisation using the personal data values: mood, stress, energy level and inner peace. Depending on the data values, the lines, shapes and tone of each visualisation vary. This creates a different structure for every timeslot of every day.
Inner peace will be a big organic and mysterious shape.
Energy and stress level will be pictured using horizontal and vertical lines respectively. The upper image shows average energy and stress levels; the lower shows low energy and high stress.
Mood will be an arrowhead/smiley pointing upwards or downwards. It is the topmost layer, clearly visible on top of the peace shape.
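To give an idea of how such a layered timeslot could be drawn, here is a rough Processing sketch; the specific shapes, counts and colours are my own stand-ins, and all values are assumed to be normalised to 0..1:

    // draw one timeslot; all values assumed normalised to 0..1
    void drawSlot(float x, float y, float w, float h,
                  float peace, float energy, float stress, float mood) {
      // inner peace: a big soft shape, larger when peace is higher
      noStroke();
      fill(220, 210, 255, 180);
      ellipse(x + w/2, y + h/2, w * peace, h * peace);
      stroke(50);
      // energy: horizontal lines, more of them with more energy
      int nE = int(energy * 10);
      for (int i = 1; i <= nE; i++) {
        line(x, y + h * i / (nE + 1), x + w, y + h * i / (nE + 1));
      }
      // stress: vertical lines
      int nS = int(stress * 10);
      for (int i = 1; i <= nS; i++) {
        line(x + w * i / (nS + 1), y, x + w * i / (nS + 1), y + h);
      }
      // mood: an arrowhead pointing up (high) or down (low), drawn on top
      float cx = x + w/2, cy = y + h/2, s = 12;
      float dir = mood > 0.5 ? -1 : 1;  // screen y runs downward
      line(cx - s, cy, cx, cy + dir * s);
      line(cx, cy + dir * s, cx + s, cy);
    }

    void setup() {
      size(400, 400);
      background(255);
      drawSlot(50, 50, 300, 300, 0.8, 0.6, 0.3, 0.7);
    }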
This is an experiment with combined output for 4 timeslots using real data. It gets a bit busy, and the lines running in different directions make me feel a little giddy. So there is work to be done, but it’s a promising start.
I participated with Eugene Tjoa. We created a mobile app that acts as a personal compass guiding you to the areas most beneficial for you.
After filling in your health profile you can choose an activity. By combining different datasets, the application overlays the vicinity with a grid that indicates better, neutral or worse areas. Clicking one of the tiles gives you more information about the advice. For example, if you suffer from asthma, areas with high pollution will be red, and clicking the tile will tell you more about the air quality.
If the area you are in is not good for you but one a little further on is better, this is indicated by a green circle just outside the map. The position of the circle indicates the direction you should be heading:
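To make the tile idea concrete: a toy version of the scoring in Processing (the real app was built in Flex) could look like this, with the datasets faked using noise() and the profile weights invented:

    int cols = 8, rows = 8;

    // combine (faked) dataset values, weighted by the health profile
    float scoreTile(int c, int r) {
      float pollution  = noise(c * 0.3, r * 0.3);        // stand-in dataset
      float noiseLevel = noise(c * 0.3 + 100, r * 0.3);  // stand-in dataset
      float wPollution = 0.8, wNoise = 0.2;  // e.g. asthma: weight pollution heavily
      return 1 - (wPollution * pollution + wNoise * noiseLevel);
    }

    void setup() {
      size(480, 480);
      noStroke();
      float tw = width / float(cols), th = height / float(rows);
      for (int c = 0; c < cols; c++) {
        for (int r = 0; r < rows; r++) {
          float s = scoreTile(c, r);
          if (s > 0.6)      fill(0, 200, 0, 120);   // better
          else if (s > 0.4) fill(230, 230, 0, 120); // neutral
          else              fill(200, 0, 0, 120);   // worse
          rect(c * tw, r * th, tw, th);
        }
      }
    }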
Despite all the talk about open data it was hard to find suitable datasets, especially finer-grained ones. Still, it was a very instructive experience for us: we learned a lot about making apps with Flex. Philips provided a nice atmosphere, good food and inspiration. All in all a very good experience.
I’ve been discovering the Flex development environment (Flash Builder) and its scripting language over the last couple of weeks. I’ve been doing the very good online course, which really gives you insight into the program and the object-oriented approach it uses. It’s nice to test my knowledge of OOP and see how Flex compares to Java. The course uses the MVC pattern to set up projects, and the whole environment is set up to make it easy to separate data from interface and functionality. Design can be styled in separate stylesheets, and in design mode you can quickly create an application by dragging and dropping components.
After having worked with Flash for over 10 years this is such an improvement. Much credit of course goes to Eclipse, the base on which the Builder is built. I also love the Network Monitor: it lets you monitor all incoming and outgoing data in three views, two of which are tree view (XML nodes) and raw (plain text). Binding is also an important new concept for me; it is used to bind data to UI components.
Flex is geared towards XML as input; Flash is a lot more flexible in that respect. So I have to format all the database data as XML, which isn’t all that difficult once you get the hang of it. Below is my first Flex project, using the data from the Collecting Silence database to look at the relationship between stress and silence. I’ve used a standard chart component to make a first visualisation and to try out the concepts.
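The conversion itself can be done in many ways. As an illustration, here is a hypothetical version written as a Processing sketch using its Table and XML classes; the column and tag names are made up, and in practice this could just as well be a server-side script:

    void setup() {
      // assumed CSV export of the database, with a header row
      Table table = loadTable("silence_stress.csv", "header");
      XML root = new XML("records");
      for (TableRow row : table.rows()) {
        XML rec = root.addChild("record");
        rec.setString("date", row.getString("date"));
        rec.setFloat("silence", row.getFloat("silence"));
        rec.setFloat("stress", row.getFloat("stress"));
      }
      saveXML(root, "data/records.xml");  // the XML the Flex app then loads
    }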
Finally I can pick up the research into the correlation between silence and stress. This was of course the main goal of the Collecting Silence project, but I never got round to really diving into it. So I picked up where I left off two years ago.
I’ve worked on a sketch in Processing:
Blue = silence data, green = stress. I want to create a sort of landscape in which the gaps between the two lines show the relation. I want to integrate this graph into an application where people can explore the data from the perspective of the correlation:
Rolling over the data lines displays the values and moves the map. You can pick a date and explore the data attached to that date.
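The core of the sketch might look something like this in Processing; the data here is faked with noise(), whereas the real sketch reads the Collecting Silence database:

    int n = 300;
    float[] silence = new float[n];
    float[] stress  = new float[n];

    void setup() {
      size(600, 300);
      for (int i = 0; i < n; i++) {
        silence[i] = noise(i * 0.05) * 100;        // stand-in silence data
        stress[i]  = noise(i * 0.05 + 500) * 100;  // stand-in stress data
      }
    }

    void draw() {
      background(255);
      for (int i = 1; i < n; i++) {
        float x0 = map(i - 1, 0, n - 1, 0, width);
        float x1 = map(i, 0, n - 1, 0, width);
        stroke(0, 0, 255);  // blue = silence
        line(x0, height - silence[i - 1], x1, height - silence[i]);
        stroke(0, 160, 0);  // green = stress
        line(x0, height - stress[i - 1], x1, height - stress[i]);
      }
      // rollover: show both values under the mouse
      int i = constrain(int(map(mouseX, 0, width, 0, n - 1)), 0, n - 1);
      fill(0);
      text("silence " + nf(silence[i], 0, 1) + "  stress " + nf(stress[i], 0, 1), 10, 20);
    }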
I’m going to build the app in Flash/AS3 (as the project website is largely built in Flash) and I’m trying to do it the OOP way again, which is still quite hard for me.
The second performance at the TIK festival was very different from the first. The sound was on and everybody was present, according to the logs. But the animation wasn’t as nice. I realised later that this was due to poor data throughput: an installation was running that took up a lot of bandwidth at times, so not all the breath flows were visible. But it was still worthwhile, I suppose, judging from this nice picture by Annemie Maes:
I realised after both performances that this is only the start. I managed to tackle all the major hurdles in a relatively short time, but there is a lot to be improved and added. I understood from the participants and the audience that they find it exciting to breathe and create something together, so my idea of bringing people together through breath seems to work. I’d like to explore this further, and I’m considering turning this into an open source project and developing a kit that people can work with, so they can join the community of breathers ;-)
The logs show even more differentiation than during the first performance:
I’m currently developing the device design and the software for breathing_time. I realise now that the form factor of the device must come from the possibilities of the wind sensor. It works best when the breath is guided to the sensor. That way I can detect inhaling and exhaling. I’ve been trying out different sizes of cones to fit on my face:
And finally settled on a size in between:
In this prototype I built the sensor into the cone, which makes it nice and stable and catches the breath in an optimal way. I also tried a collar-type design:
Here the “collar” catches the wind. It works fine and has some advantages, but you can’t capture inhaling this way.
I’ve done some work on the software. I take the wind values and the temperature values from the sensor; the temperature values give a good indication of the direction of the breath. For now I use the combined data to determine the direction of the animation (depending on inhaling or exhaling) and the colour, but of course a lot more is possible.
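A rough sketch of how that could work in Processing, assuming the Arduino sends one “wind,temperature” pair per line; the temperature threshold is a placeholder (exhaled air is warmer than inhaled air):

    import processing.serial.*;

    Serial port;
    float wind = 0, baseTemp = -1;
    int dir = -1;  // +1 = exhaling, -1 = inhaling; drives the animation

    void setup() {
      size(400, 200);
      port = new Serial(this, Serial.list()[0], 9600);
      port.bufferUntil('\n');
    }

    void serialEvent(Serial p) {
      String s = p.readStringUntil('\n');
      if (s == null) return;
      String[] parts = split(trim(s), ',');
      if (parts.length < 2) return;
      wind = float(parts[0]);
      float temp = float(parts[1]);
      if (baseTemp < 0) baseTemp = temp;       // first reading as reference
      dir = (temp > baseTemp + 0.5) ? 1 : -1;  // guessed threshold
    }

    void draw() {
      // colour follows breath direction, size follows wind strength
      background(dir > 0 ? color(200, 80, 40) : color(40, 80, 200));
      fill(255);
      ellipse(width/2, height/2, wind, wind);
    }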
I’ve constructed a “brush” from various shapes in different sizes. It would be nice to generate “brushes” dynamically, depending on the data, but for now I’m still refining the basic detection and will continue with the aesthetics when that is stable.
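One possible way to generate such a brush dynamically; the flow parameter and its range (roughly -1..1) are assumptions:

    // scatter primitive shapes whose number and size grow with the flow
    PGraphics makeBrush(float flow) {
      PGraphics b = createGraphics(64, 64);
      b.beginDraw();
      b.noStroke();
      int n = 3 + int(abs(flow) * 10);  // stronger flow, denser brush
      for (int i = 0; i < n; i++) {
        b.fill(255, 100);
        float s = random(4, 4 + abs(flow) * 20);
        b.ellipse(random(b.width), random(b.height), s, s);
      }
      b.endDraw();
      return b;
    }

    void setup() {
      size(200, 200);
      image(makeBrush(0.8), 68, 68);  // stamp one brush to check the look
    }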
I’ve upgraded my code for the stretch sensor graph in Processing. The sensor outputs numbers; when I breathe in or out the number gets lower. Detecting breathing activity isn’t as easy as, say, looking for numbers below 200, because over the time of wearing the sensor the whole range of numbers starts to shift, going either up or down. What does remain is the sharp decrease when breathing in. So now I compare two averages of 5 rounds of serial port activity each: when the older average is more than 107% of the newer one, I know I’m breathing in.
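In Processing the comparison boils down to something like this, assuming the Arduino prints one sensor value per line:

    import processing.serial.*;

    Serial port;
    float[] older = new float[5];
    float[] newer = new float[5];
    int count = 0;

    void setup() {
      port = new Serial(this, Serial.list()[0], 9600);
      port.bufferUntil('\n');
    }

    void draw() { }  // all the work happens in serialEvent()

    float avg(float[] a) {
      float sum = 0;
      for (float v : a) sum += v;
      return sum / a.length;
    }

    void serialEvent(Serial p) {
      String s = p.readStringUntil('\n');
      if (s == null) return;
      newer[count % 5] = float(trim(s));
      count++;
      if (count % 5 == 0) {
        // the values drop sharply on an inhale, so compare old to new
        if (count >= 10 && avg(older) > avg(newer) * 1.07) {
          println("breathing in");
        }
        arrayCopy(newer, older);  // the newer batch becomes the older one
      }
    }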
I’ve upgraded the first circuit with the potentiometers and the results look promising. I used the Processing sketch that comes with the Arduino software to make a graphic of my breathing activity, and I’ve tagged the different regions in the graph so it’s easy to follow the movement of the breath. By the way, the circuit is constructed in such a way that the resistance decreases when the sensor is stretched (breathing in).
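A stripped-down version of such a graphing sketch, modelled on the “Graph” example that ships with Arduino:

    import processing.serial.*;

    Serial port;
    int x = 0;
    float val = 0;

    void setup() {
      size(800, 300);
      port = new Serial(this, Serial.list()[0], 9600);
      port.bufferUntil('\n');
      background(255);
    }

    void draw() {
      stroke(0);
      line(x, height, x, height - val);  // one vertical line per sample
      if (++x >= width) {
        x = 0;
        background(255);
      }
    }

    void serialEvent(Serial p) {
      String s = p.readStringUntil('\n');
      if (s == null) return;
      val = map(float(trim(s)), 0, 1023, 0, height);  // 10-bit ADC range
    }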
For this experiment I put the belt with the sensor rather tightly around my waist and I wasn’t talking. Talking makes the ‘not breathing’ part more ragged, but you can still clearly see when I’m breathing in and out.