Introducing Silence Suit

first sensors

Meditation stool with soft sensor and heart-rate sensor

For over a year I’ve been working on a meditation wearable. It measures biometric and environmental input. Its goal is to use the measurements to improve your meditation and to generate artistic visualisations from the data. The wearable is part of a bigger project, Hermitage 3.0, a high-tech living environment for 21st-century hermits (like me). Now that the wearable project is taking shape I’d like to tell a little about the process of creating it.

The sensors
I started with a simple but surprisingly accurate heart-rate sensor that works with the Arduino platform. It uses an ear clip and sends out inter-beat intervals and beats per minute at every beat. With some additional code in Processing I can calculate heart-rate variability. These are already two important measures that can tell a lot about my state while meditating. Then I added galvanic skin response to measure the sweatiness of my skin, a nice indicator of stress or excitement. I also added an analogue temperature sensor that I put on my skin; low skin temperature also indicates a state of relaxation. Finally I made a switch sensor that is attached to my meditation stool: sitting on it marks the start of a session, getting up marks the end.
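From the inter-beat intervals the HRV calculation itself is simple. My actual code for this lives in Processing, but the sketch below shows the same idea in plain C++: RMSSD, the root mean square of successive differences between inter-beat intervals. The sample values are made up for illustration.

```cpp
#include <cmath>
#include <cstdio>
#include <vector>

// Heart-rate variability as RMSSD: the root mean square of
// successive differences between inter-beat intervals (in ms).
double rmssd(const std::vector<double>& ibi) {
    if (ibi.size() < 2) return 0.0;
    double sumSq = 0.0;
    for (size_t i = 1; i < ibi.size(); ++i) {
        double d = ibi[i] - ibi[i - 1];
        sumSq += d * d;
    }
    return std::sqrt(sumSq / (ibi.size() - 1));
}

int main() {
    // A short run of inter-beat intervals in milliseconds (made-up values).
    std::vector<double> ibi = {812, 790, 805, 821, 798, 776, 802};
    std::printf("RMSSD: %.1f ms\n", rmssd(ibi));
    return 0;
}
```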
All sensors were connected to my computer with wires, but the aim was, of course, to go wireless so I’d be free to move. Even so, I could already see day-to-day changes in my measurements.

A little help from my friends
As things were becoming more complex I posted a request for help in a Facebook group. A colleague, Michel, offered to help. We first looked at different ways to connect wirelessly. Bluetooth was a problem because of its very short range. XBee wasn’t ideal either because it needs a separate connector. We also made a version that could write to an SD card on the device, but that of course doesn’t offer live data, which was crucial for my plans. We finally settled on WiFi using the SparkFun ESP8266 Thing Dev. We were going to need a lot of analogue pins, which the Thing Dev doesn’t offer, so we used the MCP3008 chip to supply 8 analogue inputs.
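For reference, reading the MCP3008 over SPI only takes a few lines of Arduino code. This is a minimal sketch of the idea, not our actual firmware; the chip-select pin and the sample rate are assumptions.

```cpp
#include <SPI.h>

const int CS_PIN = 15;  // chip-select for the MCP3008 (assumed pin)

// Read one of the MCP3008's 8 single-ended channels (0-7).
// Protocol: start bit, then single-ended flag + channel number,
// then two clocked bytes that contain the 10-bit result.
int readMCP3008(byte channel) {
  digitalWrite(CS_PIN, LOW);
  SPI.transfer(0x01);                               // start bit
  byte high = SPI.transfer(0x80 | (channel << 4));  // single-ended, channel select
  byte low  = SPI.transfer(0x00);
  digitalWrite(CS_PIN, HIGH);
  return ((high & 0x03) << 8) | low;                // 10-bit value (0-1023)
}

void setup() {
  Serial.begin(115200);
  pinMode(CS_PIN, OUTPUT);
  digitalWrite(CS_PIN, HIGH);
  SPI.begin();
}

void loop() {
  // Print all 8 analogue channels once per second.
  for (byte ch = 0; ch < 8; ch++) {
    Serial.print(readMCP3008(ch));
    Serial.print(ch < 7 ? ',' : '\n');
  }
  delay(1000);
}
```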

Overview of all the sensors

More is more
We could then increase the number of sensors. We added an accelerometer for neck position and replaced the analogue skin-temperature sensor with a nice, accurate digital one. Around that time a wearable from another project was finished: a vest with resistive rubber bands that measures expansion of the chest and belly region. Using the incoming analogue values I can accurately calculate breath rate and upper and lower respiration. Then it was time to add some environmental sensors, which give more context to, for example, the GSR and skin-temperature readings. We added room temperature and humidity, light intensity and RGB colour, and air flow.
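To give an idea of the breath-rate calculation: the sketch below estimates breaths per minute from the stretch-band signal by smoothing it, tracking a slow baseline, and timing the moments where the signal rises above that baseline (the start of an inhale). It is a simplified Arduino-style sketch with assumed constants, not the code I actually use.

```cpp
// Minimal breath-rate estimation from a chest stretch-band signal.
// Smooth the raw value, track a slow baseline, and time the moments
// where the smoothed signal rises above the baseline.
// All pins and constants are assumptions, not the project's values.

unsigned long lastInhale = 0;
float smoothed = 0, baseline = 0;
bool above = false;
float breathsPerMinute = 0;

void setup() {
  Serial.begin(115200);
}

void loop() {
  int raw = analogRead(A0);                   // stretch band on A0 (assumed)
  smoothed = 0.9 * smoothed + 0.1 * raw;      // fast smoothing
  baseline = 0.995 * baseline + 0.005 * raw;  // slow-moving baseline

  bool nowAbove = smoothed > baseline + 5;    // small hysteresis margin
  if (nowAbove && !above) {                   // rising edge = start of an inhale
    unsigned long now = millis();
    if (lastInhale > 0) {
      breathsPerMinute = 60000.0 / (now - lastInhale);
      Serial.println(breathsPerMinute);
    }
    lastInhale = now;
  }
  above = nowAbove;
  delay(20);                                  // roughly 50 samples per second
}
```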

Vest with sensors

Environmental sensors

Seeing is believing
From the start I’ve made simple plots to get a quick insight into the session data. For now they don’t have an artistic purpose but are purely practical. At this point it is still essential to see if all sensors work well together. It’s also nice to get some general insight into how the body behaves during a meditation session.
Data is also stored in a structured text file. It contains minute-by-minute averages as well as means for the whole session.
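The logging itself is nothing more than keeping a per-minute buffer and a running session total. A minimal sketch of that structure follows; the field names and file layout are illustrative, not my actual format.

```cpp
#include <cstdio>
#include <initializer_list>
#include <numeric>
#include <vector>

// Sketch of the minute-by-minute averaging behind the session log:
// collect samples, write one average per minute, and keep a running
// total for the whole-session mean.
struct MinuteAverager {
    std::vector<double> minuteSamples;
    double sessionSum = 0;
    long sessionCount = 0;

    void addSample(double v) {
        minuteSamples.push_back(v);
        sessionSum += v;
        sessionCount++;
    }

    // Call once per minute: returns the minute average and clears the buffer.
    double closeMinute() {
        if (minuteSamples.empty()) return 0;
        double avg = std::accumulate(minuteSamples.begin(), minuteSamples.end(), 0.0)
                     / minuteSamples.size();
        minuteSamples.clear();
        return avg;
    }

    double sessionMean() const {
        return sessionCount ? sessionSum / sessionCount : 0;
    }
};

int main() {
    MinuteAverager heartRate;
    for (double bpm : {62.0, 63.5, 61.0}) heartRate.addSample(bpm);  // samples arriving

    std::FILE* log = std::fopen("session.txt", "w");
    if (!log) return 1;
    std::fprintf(log, "minute 1: HR %.1f\n", heartRate.closeMinute());
    std::fprintf(log, "session mean: HR %.1f\n", heartRate.sessionMean());
    std::fclose(log);
    return 0;
}
```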

Session data plot with legend

I’ve also made a Google form to track my subjective experience of each session. I rate my focus, relaxation and perceived silence on a 7-point Likert scale, and there is a text field for remarks about the session.

Results from Google form: very relaxed but not so focussed…

Suit
I used the vest from the other project to attach the sensors to, but last week costume designer Léanne van Deurzen made a first sample of the wearable. It was quite a puzzle for her and her interns to figure out the wiring and positioning of every sensor. I really like the look of this first design: it fits the target group of high-tech hermits, and it is also very comfortable to wear.

Upper and lower part of the suit

Back with extension where soft sensors to detect sitting will be placed

The future
The next step will be adding sensors for measuring hand position and pressure and a sound-level sensor.
Then we will have to make the processing board a bit smaller so it can fit in the suit. We can then start integrating the wiring and replacing it with even more flexible wires.
When all the sensors are integrated I can really start looking at the data and look for interesting ways to explore and understand it.
I’m also looking for ways to fund the making of 15 suits. That way I can start experiments with groups and find ways to optimise meditation by changing the environment.

e-Textile and data visualisation

Report of the meeting “Wearables and data visualisation” 13-6-13 @ V2_

Present: Ricardo O’Nascimento and Danielle Roberts (organisers), Anja Hertenberger, Meg Grant, Beam van Waardenberg

Skype: Annick Bureaud

Program:

- Introductions

- Look at and discuss examples from the web collected in a Pinterest board (http://pinterest.com/docentnv/wearable-dataviz/)

- Live demonstration

- Discussion

- Practical stuff

When preparing the meeting Danielle noticed that a lot of wearable visualisations use knitting. We discussed why this is so. Data relates naturally to patterns. Technically, the Brother machines enable you to make your own patterns and connect them to an Arduino, which can take input from a data stream. A drawback is that for most streams it can’t be live, as the machine is too slow. We looked at the Neuro-knitting project and wondered whether it uses the real brainwave data and how close the link to the actual data is.

Ebru’s social knitting project, where good news and bad news result in a change of colour while knitting, is more a form of data logging, but it does happen in real time. Another example is the conceptual knitting project: http://www.leafcutterdesigns.com/projects/creative-knitting-projects.html

What is data visualisation, as opposed to data logging for example? Beam gave a clear explanation: numbers are represented in a way that humans can understand. An RGB image is actually numbers transformed into colours. Our brain is the best data visualiser and interpreter: it simplifies complex reality into something humans can grasp. So simplification is another essential, but it can easily become reduction, and with reduction people lose the freedom to interpret all the layers of meaning. To get meaning from data, combining different sources and finding correlations between these sets is key.

Why is so much data on clothing static? To have dynamic data you need to work with electronics in clothing. As yet there is no fibre that can act as a carrier of information; nanotechnology will eventually make this possible.

Maybe wearables are more suitable for data collection and screens more for displaying, Meg wondered. Screens are especially useful when looking back at collected data, and we all now carry our own personal screen with us in the form of our smartphone. Despite the drawbacks, displaying data on your body can be significant, Anja argued. Like someone walking around with a sandwich board, the wearer becomes part of the information, or like the funny guy at a party wearing a t-shirt with some crazy text on it: you can communicate about it. It can be seen as an extension. On the other hand, part of your own identity is lost in transmitting the information.

Textile has a history as an information carrier. Think of traditional embroidery used to tell a story, or the use of colours and special gowns in different religions. This, however, has little relation to what we call data today. Data visualisation deals with big data, where data sets are combined, correlated and represented so that we can derive meaning from them.

We’re still telling stories with data. This can take the form of life-logging. Ricardo’s Rambler shoe can be classified as a life-logging device: you can trace back your life from the track you’ve walked and share it on social media. Memoto is a company dedicated to life-logging. Here too the device only captures images and GPS coordinates; the story is built in the database on screen.

Beam showed his brand-new wearable that visualises space data from the sun. This is space data we can relate to; with other space data the connection is too thin and we lose interest. The wearable is a good example of simplifying a complex phenomenon, solar flares, into an appealing visualisation that is dynamic but doesn’t change too fast.

Danielle demoed an example of a wearable as a data collector. The True-Sense kit is a tiny biosensor that can capture posture, EMG, EOG, EEG and electrosmog. It can capture in real time and log data: brainwaves during sleep and meditation, activity, heart rate, etc. All for €35. We are all very enthusiastic about it and will be organising a hands-on meeting around it to explore its possibilities.

The practical stuff concerned the quality of the remote participation, which is very poor. Melissa suggested earlier that the e-Textile group should get a better microphone (±€120). Objections to this proposal were: who will own the microphone, will it help, and isn’t the connection quality the main bottleneck? Our first step towards improvement will be to try out Google Hangout, so at the next meeting we’ll be using that platform.

breathing_time at the Quantified Self conference

On May 12th I led a breakout session at the second European Quantified Self conference in Amsterdam. The goal was to exchange experiences in breath and group tracking and to demo the new, wireless version of the breathing_time concept.

I started the breakout with an overview of the previous version. We soon got into a discussion on how hard it is to control your breathing rate. One participant used an Emwave device to try to slow down his breath rate; he could never quite reach the target and therefore could never reach heart coherence, which is frustrating. In my view the way to go is to become more and more aware of your breathing without intentionally wanting to change it. I went from chronic hyperventilation to an average breath rate of 4 breaths per minute without trying; years of daily Zen meditation did that for me.

As usual people saw interesting applications for the device that I hadn’t thought of, like working with patient groups. Another nice suggestion was to test the placebo effect of just wearing the cone.

When it was time for the demo people could pick up one of the breathCatchers:

I’d managed to finish four wireless wearables, running on 12-volt batteries with an XBee module and an Arduino Fio for transmitting the data.

After some exploration we did two short breathing sessions so we could compare. The first was to just sit in a relaxed way and not really pay attention to the breathing (purple line). The second was to really focus on the breathing (grey line). The graph below shows the results:

Participants could look at the visual feedback but I noticed most closed their eyes to be able to concentrate better.

The last experiment was the unified visualisation of the four participants. I asked them to pay close attention to the visualisation, which represented the data as four concentric circles, with a moving dot that indicates breathing speed and moves with the breath flow.

It was fascinating to watch, as the dots were moving simultaneously a lot of the time. However, when asked how they experienced this session, most participants said they saw the exercise as a game and were trying to overtake each other. They used “breath as a joystick”, to quote one of them. This was not my intention; the focus should be on the unifying aspect. I got some nice suggestions on how to achieve this: give more specific instructions and adapt the visuals to separate the personal and communal data.

All in all we had a very good time exploring respiration and I’m grateful to all of the participants for their enthusiasm and valuable feedback.

non-woven wearable

Because I’m extending the breathing_time project into a workshop, I’m doing some research on non-woven materials to make the cones from. The first version was made of paper and felt. It looked very nice but wasn’t very practical: paper folds and crumples easily, and the felt on the face gets dirty; as it was glued to the paper I couldn’t replace it. The department of Wearable Senses at the TU/e kindly gave me some samples to experiment with. These are the results:

Lantor, producer of all sorts of non-woven materials:

To start off with the best one: this is a thin, black non-woven. It’s very easy to work with and can be glued with ordinary Collall glue, which sticks very well. The ease of working reminds one of paper. It has some nice extras: you can use sticky tape on it and remove that tape without leaving a trace, even after a few days:

This is very useful, as it allows me to make a replaceable, protective edge; the bare edge is too sharp on the face. You can also glue two layers on top of each other to make the cone firmer. This gives a very stylish appearance:

You can just use scissors to cut out the shape, and it doesn’t tear like paper, so attaching the strap is no problem.

I also tried another non-woven by Lantor. It has a felt-like appearance, very nice, but it is too floppy for my purpose and quite hard to glue:

Colbond, producer of all sorts of non-woven materials:

This semi-transparent, thermally bonded non-woven has a very appealing look. It is stiff, even a bit sharp on the edges. I was really looking forward to trying it out, but the result was a bit disappointing. It was hard to glue due to its open structure, and it also turned out to be very brittle: a fold doesn’t go away (see the right end). In that sense it is worse than paper. As I will be reusing these cones with different people they have to stay clean and in shape, and this one didn’t pass that test.

about breathing_time

For the TIK festival documentation I wrote an article about breathing_time:

Background and concept

Breathing_time was conceived as part of the Time Inventors Kabinet[1] project, for which I was an invited artist. The idea behind this project was to use different ecological inputs for creating new notions of time. Right from the start I had the idea to work with physiological data as input for a new time. Can we make time more personal if it is driven by our own body? Can we change our perception of time through growing awareness of the way our body functions? These were the thoughts that motivated the work.

As the concept of the windclock[2] was a central theme in the TIK project, the most obvious physiological data to work with was breathing.

Early on in the project I had the idea of representing this personal data in a direct way, using analogue techniques like drawing. I experimented a lot with ink and stains and made a hand-driven drawing machine that drew a line of varying thickness depending on the speed of breathing. I drew inspiration from Japanese calligraphy techniques, especially ensō[3]. While the idea of ink stayed, the form changed from analogue to digital: an animation with sound that represents the breath flow.

I wanted to work with a group of five people breathing at the same time and explore whether becoming aware of someone else’s breathing pattern would influence your own, and whether we could reach a certain entrainment, our own rhythm. This resulted in two performances at the TIK festival.

Hardware

I built a custom device, the breathCatcher, using the JeeLabs RBBB Arduino[4], the Modern Device Wind Sensor[5] and the USB Bub[6]. The device is cone-shaped to capture the breath flow in both directions. The wind sensor is placed in the opening of the cone, which is worn over the nose and mouth; breathing in and out through the nose is required. A felt ring protects the face from the sharp paper edge, and a felt container at the bottom holds and protects the microcontroller. The paper device is connected to a PC by a cable using a USB-to-serial connection.

Sensor platform

For working with the sensor data I used the CommonSense platform[7]; I was sponsored by Sense-os, the creators of that platform. CommonSense is an advanced online platform with a comprehensive API[8] for working with sensor data. After creating an account you can create sensors, five in my case, and upload data to and download data from the server. Different queries are possible and basic visualisation is available, which comes in very handy during development.

I received a lot of help from Sense-os with connecting to the API and querying the database. All data is exchanged in JSON format, which is very particular about quotes and therefore hard to work with.
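To give an impression of the quote-juggling: each batch of data points is a JSON string built by hand, and every key and string value needs its own escaped quotes. The field names below are illustrative only, not the actual CommonSense API schema.

```cpp
#include <cstdio>
#include <string>

// Building a JSON body for one breath data point by hand. Every key and
// string value needs escaped quotes, which is where most of my formatting
// errors came from. Field names are illustrative, not the real API schema.
std::string makePayload(long timestamp, double value) {
    char buf[128];
    std::snprintf(buf, sizeof(buf),
                  "{\"data\":[{\"date\":%ld,\"value\":\"%.2f\"}]}",
                  timestamp, value);
    return std::string(buf);
}

int main() {
    std::printf("%s\n", makePayload(1336744800L, 0.42).c_str());
    // prints: {"data":[{"date":1336744800,"value":"0.42"}]}
    return 0;
}
```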

For them the challenge lay in the near real-time service of sending and receiving five times ten data points per second. I was advised to use a cable instead of WiFi to ensure minimal data loss.

Software

I wrote custom software, drawingBreath, in Processing[9], using some native Java and a few extra libraries and classes.[10] This software handles all the connections with the CommonSense API and uses several timers to keep the tasks of sending and receiving data separate.

For 60 seconds the software calibrates all five devices so as to be able to detect the direction of the breath flow. The temperature sensor was very useful for that purpose.
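The calibration idea is straightforward: average the readings for 60 seconds to get a baseline, then treat readings above the baseline as one breath direction and readings below as the other. drawingBreath does this in Processing on the PC; the sketch below shows the same logic in Arduino-style C++ running on the device instead, with the margin and the in/out mapping as assumptions.

```cpp
// Sketch of the calibration step: average the wind/temperature signal for
// 60 seconds into a baseline, then classify readings above the baseline as
// breathing out and readings below as breathing in. The pin, the margin and
// the direction mapping are assumptions, not the project's actual values.

const unsigned long CALIBRATION_MS = 60000;
unsigned long startTime;
float baseline = 0;
long sampleCount = 0;
bool calibrated = false;

void setup() {
  Serial.begin(9600);
  startTime = millis();
}

void loop() {
  float reading = analogRead(A0);  // wind/temperature signal (assumed on A0)

  if (!calibrated) {
    baseline += reading;
    sampleCount++;
    if (millis() - startTime >= CALIBRATION_MS) {
      baseline /= sampleCount;     // average of the first 60 seconds
      calibrated = true;
    }
  } else {
    const float margin = 3;        // dead band around the baseline (assumed)
    if (reading > baseline + margin)      Serial.println("out");
    else if (reading < baseline - margin) Serial.println("in");
    else                                  Serial.println("pause");
  }
  delay(100);                      // ten data points per second, as in the project
}
```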

After the breath flow has been calibrated the animation starts. Each of the five participants is represented by a ‘brush tip’ that starts to draw a circle: a red dot moving counter-clockwise represents breathing in, a blue dot moving clockwise represents breathing out. The radius of the circle is determined by the strength of the breath flow, as are the size of the tip and its colour intensity. In between breaths the drawing clears to start again.

Other software used in, and in aid of, this project was Csound, Skype, Dropbox (see below) and NTP[11]. The latter was very important, as the timestamps of the breath data points from all participants had to be in sync.

Adding sound

My friend Richard van Bemmelen, a musician and programmer, kindly offered to help me add sound to the animation. My idea was to create a bamboo wind chime with our breaths, producing a sound only when the breath status changed from in to out or vice versa. Richard is an advanced user of Csound[12] and wanted to use that program. As bamboo already exists as an opcode[13], we could start quickly. The sound produced by Csound wasn’t the rattle of sticks but a far more beautiful, flute-like sound whose pitch depends on the value of the breath flow data. To make everything work, Csound had to be installed on all the participants’ PCs, and a custom .csd file defining the settings for the synthesizer was placed in its folder. To make starting the sound part easy, Richard created a batch file that starts Csound and makes it wait for messages from Processing. For communicating with Csound the oscP5 library[14] was used in Processing: a message with the breath value was sent whenever the breath status changed.
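The trigger logic is no more than edge detection on the breath direction. The real implementation sends an OSC message from Processing with oscP5; in the C++ sketch below the OSC call is replaced by a hypothetical stand-in, sendToCsound(), and the sample values are made up.

```cpp
#include <cstdio>

// A note is triggered whenever the breath status flips from inhaling to
// exhaling or back. sendToCsound() is a hypothetical stand-in for the
// actual OSC message sent from Processing via oscP5.
enum BreathStatus { IN, OUT };

void sendToCsound(float breathValue) {
    // stand-in: in the real software an OSC message carrying the breath
    // value is sent here, and Csound turns it into a bamboo note
    std::printf("trigger note, breath value %.2f\n", breathValue);
}

int main() {
    float samples[] = { -0.4f, -0.2f, 0.1f, 0.5f, 0.3f, -0.1f, -0.6f };
    BreathStatus previous = IN;
    for (float v : samples) {
        BreathStatus current = (v >= 0) ? OUT : IN;  // sign = breath direction
        if (current != previous) sendToCsound(v);    // status changed: play a note
        previous = current;
    }
    return 0;
}
```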

The performances

breathing_time was a networked performance. I selected five people of different nationalities to take part in the experiment; with that I wanted to underline the universal character of breathing. From five different locations these five people would create sound and visuals using only their breath. Because of the drawingBreath software all participants saw the same animation and heard the same sounds, and this output could act as feedback for them. I was in Brussels, performing for an audience that saw and heard the same things as the participants.

One thing that took a lot more effort than anticipated was preparing the participants for the actual performances. To test the server and the different versions of the software we had planned four test sessions at the start, but first all the software had to be installed on the different computers. Right at the beginning I had to move everybody to the Windows platform, as running a Processing application built on a Windows PC on a Mac turned out to be a hassle, and the drivers for the USB Bub were not available for the Mac.

Having equipped two participants with my old laptops, we could start testing. The Sense-os server did a very good job. The main problem was instructing everybody and making sure that the software and Csound updates were put in the right folders. I used Dropbox[15] to supply updates and manuals, but even that was hard for some people. Through Skype I gave live instructions and could answer questions from all participants at the same time. After a good final rehearsal it was time for the real thing.

The performances started with each participant introducing him- or herself in a pre-recorded sound file, in both their mother tongue and English. At exactly 19:00 everybody started their drawingBreath program, and calibration began while the introductions continued.

Our assignment for the performances was: relax and breathe naturally. Try to detect your own breath circle and see if you can leave some time between each breath. If these moments between breaths coincided, the screen would be cleared and we would have reached some sort of communal breathing.

The most important thing I learned from the performances is that breathing is a very personal thing that isn’t easily manipulated. This shows clearly in the CommonSense logs, where you can see each breathing pattern almost as a signature.[16] Our breathing gaps didn’t coincide, but the different movements of the breath flows were interesting to watch.

I also realised that although the performances went reasonably well, this is just the beginning. There are so many things that could be improved, for which I simply lacked the time. Enthusiastic reactions have brought me new ideas for working with the concept. I’m considering creating an online community to improve the hard- and software, to breathe together online and to explore the idea of creating a communal “breathing time” further.

Specifications

drawingBreath software (Processing & Java), breathCatcher hardware (Arduino RBBB, Modern Device Wind sensor, USB Bub, USB cable, paper, felt, elastic band), sensor platform (CommonSense API), sound (Csound & Processing)

Credits

Concept, design, development & programming: Danielle Roberts

Sound: Richard van Bemmelen

CommonSense API: Sense-os

Participants: Adriana Osorio Castrillon, Lorenzo Brandli, Mieke van den Hende, Tomoko Baba

Location: Imal, Brussels

Also made possible by OKNO

Blog: http://www.numuseum.nl/blog/category/breathing_time/



[1] http://timeinventorskabinet.org/

[2] http://www.timeinventorskabinet.org/wiki/doku.php/windclocks

[3] http://en.wikipedia.org/wiki/Ensō

[4] http://jeelabs.com/products/rbbb

[5] http://shop.moderndevice.com/products/wind-sensor

[6] http://jeelabs.com/products/usb-bub

[7] http://www.sense-os.nl/commonsense

[8] http://www.sense-os.nl/api-console

[9] http://processing.org/

[10] Processing serial and net, guicomponents GTimer class, org.json and Java.net.URL and URLConnection classes

[11] http://www.meinberg.de/english/sw/index.htm

[12] http://www.csounds.com/

[13] http://www.csounds.com/manual/html/bamboo.html

[14] http://www.sojamo.de/libraries/oscP5/

[15] http://www.dropbox.com

[16] http://www.numuseum.nl/blog/2012/05/11/performance-11-5/

wind sensor demo

Yesterday we managed to get the first wind sensor working. Together with Richard I connected the sensor to the RBBB board, calibrated it and read some wind/breath data from the serial port into Processing. It works surprisingly well:
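For completeness, the Arduino side of this is tiny: read the wind sensor and print one value per line so Processing can parse it from the serial port. A minimal sketch with an assumed analogue pin and sample rate, not our exact setup:

```cpp
// Minimal sketch for streaming the Modern Device wind sensor from the
// RBBB over USB-serial so it can be read into Processing. The analogue
// pin and sample rate are assumptions.
const int WIND_PIN = A0;

void setup() {
  Serial.begin(9600);
}

void loop() {
  int raw = analogRead(WIND_PIN);  // raw wind/breath value
  Serial.println(raw);             // one value per line for easy parsing
  delay(100);                      // roughly 10 readings per second
}
```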

I have already noticed that the distance from the sensor to the nostrils is critical, so I’ll have to experiment. Richard had a nice idea: use tubes to direct the air from the nostrils straight to the sensor for the most accurate result. I will give it a try. It will make the wearable scarier, but that doesn’t have to be a problem :)

curing Bluetooth

Yesterday I worked with Michel from the A&T lab to discover what was wrong with my Bluetooth connection. It was interesting to see how Michel went about it:

Michel testing the Bluetooth module

  • We first tried to get data from the setup as it was. Nothing happened, just like last week.
  • We then used another Arduino to check if we could connect to the Bluetooth module. With the Mac software CoolTerm we verified that data was being sent over Bluetooth.
  • Once that worked we tried my code. There was an error with the Wire library; we commented out that bit of code, tested again, and for the first time in a long time I saw the formatted string with sensor values.
  • The next step was to test the code on the Arduino Mini in the wearable. Michel made sure the RX and TX wires were crossed. Still nothing happened.
  • We then replaced the rechargeable batteries with the 9-volt battery Michel had used. It worked in CoolTerm!
  • I started the test AQAb app on my Android and yes! A nicely formatted string appeared on the screen.

To conclude, I think it was a combination of problems: wrong wiring, an error in the code (the Wire library) and not enough power. But now we can continue with the app, with a big thank you to Michel.
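For the record, the part that finally worked is essentially this: print a comma-separated sensor string to the Bluetooth module over a serial connection. The sketch below is a minimal, generic version: the SoftwareSerial pins, baud rate and sensor fields are assumptions, and the Wire (I2C) code that caused the error is left out, just as we commented it out during the test.

```cpp
#include <SoftwareSerial.h>

// Minimal sketch of sending a formatted sensor string to the phone over a
// serial Bluetooth module. Pins, baud rate and the sensor fields are
// assumptions; the Wire (I2C) sensor code is omitted, as in our test.
SoftwareSerial bluetooth(10, 11);  // RX, TX (assumed pins)

void setup() {
  bluetooth.begin(9600);
}

void loop() {
  int sensor1 = analogRead(A0);    // placeholder sensor reading
  int sensor2 = analogRead(A1);    // placeholder sensor reading
  // One comma-separated line per second, easy to parse in the app.
  bluetooth.print(sensor1);
  bluetooth.print(',');
  bluetooth.println(sensor2);
  delay(1000);
}
```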