Trying to grasp the complexity of the project

For me the project is becoming more and more complex. So in this blog post I will try to give you an overview of the different aspects of the project around the Silence Suit, how these aspects relate to each other, and my difficulties with this complexity.

Developing the button further
To keep it concrete, I will first explain what we did today. I helped Danielle to develop the button I introduced to you last time a bit further. We learned that it is not necessary to use this button to mark an exceptional experience while meditating. Instead we want to use it to mark a moment when something in the environment changes, think of a change in light or a loud noise. We want to mark the moment in your timeline when something happens that influences your meditation session, so you can later see its impact when analysing your data. So we try to integrate the button into the suit as comfortably as possible. First we thought about a glove, but now we think a ring would be better. The button has to be as small as possible. We are also thinking about making the button from conductive fabric, like we did with the sitting sensor. That would make it much more comfortable because there would be no hard piece on the ring.

button ring – to mark negative influences of your surroundings

Baseline measurements
But why is it no longer necessary to mark an extraordinary positive experience? That has to do with the artificial intelligence of the software. I find it difficult to understand precisely how it works, but we learned from the data scientist that the software will learn by itself what a good meditation session is. To make it a learning system you need many baseline measurements. A baseline measurement means that you track your meditation session without any actuation. Before and after meditating you fill in the questionnaire developed by Danielle in consultation with different experts; she has formulated many relevant questions. By recording a minimum of 30 sessions in combination with this questionnaire, the system can start to figure out which aspects are most important for a good session.
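
To make that a bit more tangible, here is a minimal Processing/Java sketch of the simplest kind of analysis such a learning system could start with: correlating one feature of the baseline sessions, for example the average heart rate, with the questionnaire score of each session. The numbers and the use of a plain Pearson correlation are my own assumptions for illustration; the actual software is being developed by the data scientist and works differently.

float[] avgHeartRate = { 62, 65, 58, 70, 61, 59 };   // made-up average heart rate per baseline session
float[] sessionScore = { 7, 5, 8, 4, 7, 8 };         // made-up questionnaire score per session

void setup() {
  println("correlation heart rate vs. score: " + pearson(avgHeartRate, sessionScore));
}

// Pearson correlation between two equally long series of session values
float pearson(float[] x, float[] y) {
  int n = x.length;
  float sx = 0, sy = 0, sxx = 0, syy = 0, sxy = 0;
  for (int i = 0; i < n; i++) {
    sx += x[i];  sy += y[i];
    sxx += x[i] * x[i];  syy += y[i] * y[i];
    sxy += x[i] * y[i];
  }
  return (sxy - sx * sy / n) / sqrt((sxx - sx * sx / n) * (syy - sy * sy / n));
}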

questionnaire – by Google Forms

Between scientific research and design thinking
It is important that the data are correct so they can be used in a scientific way. That is one of the aspects which makes the project so complex. On the one hand it is scientific research. On the other hand the suit arises from a design mentality, which aims to make it as chic and as comfortable as possible at the same time; otherwise the user will not want to use it for his own scientific research. Furthermore, Danielle has her vision as an artist to bring all these disciplines together to create a completely new and unknown outcome. The Silence Suit is actually a small part of the bigger vision of Hermitage 3.0. But how does Danielle handle the complexity of her vision? I think one way is that she assumes different roles in the project. At one point she takes the role of the researcher and at another point she really thinks as a designer. That makes it possible to manage the complexity. To deepen the different aspects, she asks various experts for help.

Customer journeys, flow charts and wireframes
That is also how she worked to develop the wireframes. We have to think about what the screen will look like, so that the user knows how to use the database for his own interests. First, Danielle put herself in the shoes of different kinds of users and developed a customer journey for each of them. From these customer journeys an expert created a flow chart, which brings them to a more abstract level. The flow chart serves as an intermediate step from customer journey to wireframes. The wireframes will finally specify the functionality of each screen, so that every user can use it for his own interests.

Flow chart – one use case, by Anne van den Heuvel

So as you can see, there are many things in development. Many things are going well and every team member is working hard to bring the Silence Suit to a higher level. Of course, there are still many things to be explored, but that keeps it interesting. I find it really nice to see that after so many organizational problems in the beginning, we are really making great steps towards realizing a meaningful research project. Next week we will visit the DesignLab Twente, where we will meet Vera de Pont to bring the electronics and the new design of the suit together. I am really excited about that meeting and I hope to give you another inspiring insight into our project next time.

Exploring a new button

I honestly have to say that the project seems to be going really well. I enjoy every day of my internship because every week there is something new to develop. Every time I am excited to see what comes next, which ideas will be altered and which will be completely new. As every week, I will give you a little impression of what has happened recently.

The design of the Silence Suit is in development. Vera de Pont is working hard to optimize the sketches and to start sewing as soon as possible. This week she came along to show some different fabrics. She also presented her newest sketches of the suit.

Bottom layer of Silence Suit by Vera de Pont

The idea of a contemporary monk is taking shape. The air circulation is also optimized by including the pattern in the design. To decorate the suit in a practical way, she plans to embroider graphic icons on the pockets for the different sensors. That way you know where all the sensors go, and it simplifies the maintenance of the suit after washing.

embroidery – graphic icons

We also have to work on the artificial intelligence part of the system. At a certain point the system has to know what a good meditation session is in order to influence it in a positive way. The goal is to program a good meditation session. The programmer wants to know: what constitutes meditation quality? To answer that question a lot of tests have to be done.

By means of a questionnaire in combination with the data of the session, Danielle wants to do research on the quality of the meditation. Therefore she plans to include a new sensor in the suit, and we already did some tests with it this week. The plan is to lengthen one sleeve of the under vest into a glove. We will include two buttons in the glove that you can push while meditating without moving too much.

button – to mark an extraordinary positive or negative experience

One button will mark an extraordinary positive experience in the timeline of the session. You have to push the other one if there is a negative influence of your surroundings. For example, if the light instrument falls down or there is some background noise you can push the button. The system will mark that point in your timeline and you can see afterwards the effect of that occurrence on your meditation.

The form as well as the content of the Silence Suit are in development. As you can see every week we are making steps to get a grip on the complexity of the project.

Virtual View: building an experiment

I was very lucky to meet Ilia from Okazolab. When I told him about Virtual View and the research I was planning to do, he offered me a licence to work with EventIDE. This is a state-of-the-art stimulus creation software package for building (psychological) experiments with all kinds of stimuli. Ilia has built this software, which was, at the time I met him, still under development. Besides letting me use the software he offered to build an extension to work with the Heartlive sensor. He has been very supportive in helping me build my first experiment in EventIDE.

It is a very powerful program, so it does take a while to get the hang of it. The main concept is the use of Events (a bit similar to slides in a PowerPoint presentation) and the flow between these Events. Each Event can have a duration assigned to it. On the Events you can place all kinds of Elements, ranging from bitmap renderers to audio players and port listeners. Different parts of the Event timeline can have snippets of code attached to them. The program is written in .NET, so you can do your coding in .NET and also use XAML to create a GUI screen and bind items like buttons or sliders to variables which you can store.

You can quickly import all the stimuli you want to use and manage or update them in the library. From the library you drag an item onto a renderer Element so it can be displayed and gets a unique id. We'll use this id to check the responses to the individual images.

The Events don't have to follow a linear path; you can make the flow of the experiment conditional. So for my design I made a sub-layer on the main Event timeline which holds the sets of images and sounds. The images in each set are randomised by a script, and so are the sets themselves, as we want to rule out the effect of presentation order. In the picture you can see the loop containing a neutral stimulus, 6 landscape pictures with a sound and a questionnaire. This runs 5 times and then goes to the Event announcing the end of the experiment. During the baseline measurement and the sets the heart rate of the participant is measured, and the answers to the questions belonging to each set are logged.
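
Just to illustrate the randomisation logic (a plain Java/Processing sketch of the idea, not the actual EventIDE script; the image ids are made up): each set is shuffled internally, and the order of the sets is shuffled as well.

import java.util.ArrayList;
import java.util.Collections;

void setup() {
  // five hypothetical sets of six landscape images each
  ArrayList<ArrayList<String>> sets = new ArrayList<ArrayList<String>>();
  for (int s = 0; s < 5; s++) {
    ArrayList<String> set = new ArrayList<String>();
    for (int i = 0; i < 6; i++) set.add("set" + s + "_img" + i);
    sets.add(set);
  }
  Collections.shuffle(sets);          // randomise the order of the sets
  for (ArrayList<String> set : sets) {
    Collections.shuffle(set);         // randomise the images within each set
    println(set);
  }
}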

Data acquisition and storage is managed with the Reporter Element. You can log all the variables used in the program and determine the layout of the output. After the trial you can export the data directly to Excel or to a text or csv file. Apart from just logging the incoming heart rate values, we calculated their mean inside EventIDE for each image and for the baseline measurement. This way we can see at a glance what is happening with the responses to the different images.
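
As a rough illustration of that per-image averaging (again a plain Java/Processing sketch, not the EventIDE Reporter itself; ids and values are made up): each incoming heart-rate value is stored under the id of the image on screen, and the mean is taken per id afterwards.

import java.util.ArrayList;
import java.util.HashMap;

HashMap<String, ArrayList<Float>> samples = new HashMap<String, ArrayList<Float>>();

void setup() {
  // a few fake samples: image id -> heart-rate values recorded while it was shown
  addSample("baseline", 61);  addSample("baseline", 63);
  addSample("img_04", 58);    addSample("img_04", 57);  addSample("img_04", 59);
  println("baseline mean: " + meanFor("baseline"));
  println("img_04 mean:   " + meanFor("img_04"));
}

void addSample(String imageId, float bpm) {
  if (!samples.containsKey(imageId)) samples.put(imageId, new ArrayList<Float>());
  samples.get(imageId).add(bpm);
}

float meanFor(String imageId) {
  ArrayList<Float> values = samples.get(imageId);
  float sum = 0;
  for (float v : values) sum += v;
  return sum / values.size();
}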

For me it was kind of hard to find my way in the program. What snippet goes where, how do I navigate to the different parts of the experiment? But the more I’ve worked with the program the more impressed I’ve become. It feels really reliable and with the runs history you are sure none of your precious data is lost.

hard- and software done

I've spent two days working on the @cocoon application. I got a lot of help from Rob, Eric and also from Wim. Despite all our hard work we didn't manage to get the integrated piece to work.


It's interesting how I had to scale down the functionality as development progressed. From the interactive prototype that measured noise level and heart rate and adjusted its behaviour accordingly, I've now landed at a green pulsating cocoon that plays nature sounds. No measuring, no interactivity. And I still have to upload the final code for it to work… The isolation mission starts in an hour and I hope I have time to finish the piece so we can at least experience something.

The main hurdle was the unpredictability of the hardware. The noise sensor especially is crap, or perhaps it's broken, but it behaved completely unpredictably from the start, outputting a lot of zeros and just plain noise with very little relation to the actual sound level. Only peak noises stand out, but not when there are several in succession… Rob says it is oscillating heavily and that it would take two weeks to sort it out and really have a reliable sensor.

I bought a new version of the pulse sensor, the amped version. On its own it worked quite well after some initial calibration, but it had to be integrated with the pulsating green light. Rob wrote a special buffer library so I could use the internal clock that was already in place for the heart-beat sensing. We got that to work together, but when we added the sound sensing the whole code became unstable.

After that we tried the solution of using a switch to activate the two parts of the system: relaxation and blue light (just because it was so nice). During that process we discovered a problem with the relay used to switch on the stereo sound playing on a separate MP3 player. We had a major meltdown on Monday during which the IC was destroyed. Last night at eleven we discovered that the relay was dead as well: it will only switch on.
Probably because it was so late we didn't even get the fading to run, but I think I solved that on the train home. So in isolation I'll try to get the most basic mode running. Which is probably where we should have started…

Trying to build this system in a couple of days has been a very educative experience for me. I've learned a lot from Rob and Eric, who are real pros. According to Rob, testing time is three times the time spent on development… I'm beginning to see he's right.

A couple of hours later I was able to get the cocoon working. Now I just have to put it up in the Seeker.

about breathing_time

For the TIK festival documentation I wrote an article about breathing_time:

Background and concept

Breathing_time was conceived as part of the Time Inventors Kabinet[1] project for which I was an invited artist. The idea behind this project was to use different ecological input for creating new notions of time. Right from the start I had the idea to work with physiological data as input for a new time. Can we make time more personal if it is driven by our own body? Can we change our perceptions of time through growing awareness of the way our body functions? These were thoughts that motivated the work.

The concept of the windclock[2] was a central theme in the TIK project, so the most obvious physiological data to work with was breathing.

Early on in the project I had the idea of representing this personal data in a direct way using analogue techniques like drawing. I experimented a lot with ink and stains and made a hand-driven drawing machine that drew a line of varying thickness depending on the speed of breathing. I drew inspiration from Japanese calligraphy techniques, especially ensō[3]. While the idea of ink stayed, the representation changed from analogue to digital: an animation with sound to represent the breath flow.

I wanted to work with a group of five people breathing at the same time and explore whether becoming aware of someone else's breathing pattern would influence your own, and whether we could reach a certain entrainment, our own rhythm. This resulted in two performances at the TIK festival.

Hardware

I built a custom device, the breathCatcher, using the JeeLabs RBBB Arduino[4], the Modern Device Wind Sensor[5] and the USB Bub[6]. The device is cone-shaped to capture the breath flow in both directions. The wind sensor is placed in the opening of the cone. The cone is worn over the nose and mouth; breathing in and out through the nose is required. A felt ring protects the face from the sharp paper edge and a felt container at the bottom holds and protects the microcontroller. The paper device is connected to a PC by a cable using a USB-to-serial connection.

Sensor platform

For working with the sensor data I used the CommonSense platform[7]. I was sponsored by Sense-os, the creators of the platform. CommonSense is an advanced online platform with a comprehensive API[8] for working with sensor data. After creating an account you can create sensors, five in my case, and upload data to and download data from the server. Different queries are possible and basic visualisation is available, which comes in very handy when you are developing.

I received a lot of help from Sense-os with connecting to the API and querying the database. All data is exchanged in JSON format, which is very particular about quotes and therefore made it hard to work with by hand.
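
For illustration, this is roughly how a data point can be put together with the org.json classes mentioned in the notes[10], so that the library handles all the quoting. The field names and the overall structure are my own simplification here, not the exact CommonSense format.

import org.json.JSONArray;
import org.json.JSONObject;

void setup() {
  try {
    JSONObject point = new JSONObject();
    point.put("value", 23.4);                                // measured breath-flow value
    point.put("date", System.currentTimeMillis() / 1000.0);  // unix timestamp in seconds

    JSONArray data = new JSONArray();
    data.put(point);

    JSONObject payload = new JSONObject();
    payload.put("data", data);

    println(payload.toString());   // ready to be sent to the server, quoting taken care of
  } catch (Exception e) {
    e.printStackTrace();
  }
}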

For them the challenge lay in the near real-time service of sending and receiving five times ten data points per second (ten points per second for each of the five sensors). I was advised to use a cable instead of Wi-Fi to ensure minimal data loss.

Software

I wrote custom software, drawingBreath, in Processing[9]. I used some native Java and a few extra libraries and classes.[10] This software handles all the connections with the CommonSense API. It uses several timers to keep the tasks of sending and receiving data separated.
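
The actual sketch uses the GTimer class from the guicomponents library[10]; purely to illustrate the idea of keeping sending and receiving on separate clocks, here is the same structure with plain java.util.Timer tasks (intervals and method names are made up):

import java.util.Timer;
import java.util.TimerTask;

Timer sendTimer = new Timer();
Timer receiveTimer = new Timer();

void setup() {
  // push the newest local breath value every 100 ms (ten points per second)
  sendTimer.scheduleAtFixedRate(new TimerTask() {
    public void run() { sendLatestValue(); }
  }, 0, 100);

  // fetch the latest values of all five sensors every 200 ms
  receiveTimer.scheduleAtFixedRate(new TimerTask() {
    public void run() { fetchRemoteValues(); }
  }, 0, 200);
}

void draw() { }  // keep the sketch alive while the timers run in the background

void sendLatestValue()   { /* POST the newest reading to the CommonSense API */ }
void fetchRemoteValues() { /* GET the data of all five sensors from the API */ }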

For 60 seconds the software calibrates all five devices so as to be able to detect the direction of the breath flow. Using the temperature sensor was very useful for that purpose.

After the breath flow has been calibrated the animation starts. Each of the five participants is represented by a ‘brush tip’ which starts to draw a circle. A red tip moving counter-clockwise represents breathing in; a blue tip moving clockwise represents breathing out. The radius of the circle is determined by the strength of the breath flow, as are the size of the tip and its colour intensity. In between breaths the drawing clears to start again.
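
A minimal Processing sketch of that drawing principle (not the actual drawingBreath code; the breath flow is simulated here with a sine wave instead of sensor data):

float angle = 0;          // position of the brush tip on the circle

void setup() {
  size(400, 400);
  background(255);
}

void draw() {
  // simulated breath flow; in drawingBreath this comes from the wind sensor
  float signal = sin(frameCount * 0.02);
  float flow = abs(signal);
  boolean breathingIn = signal > 0;

  float radius = map(flow, 0, 1, 40, 160);   // stronger flow -> larger circle
  float tipSize = map(flow, 0, 1, 2, 12);    // ...and a bigger, more intense tip

  // red moving counter-clockwise for breathing in, blue moving clockwise for breathing out
  if (breathingIn) {
    angle -= 0.03;
    fill(200, 0, 0, map(flow, 0, 1, 40, 255));
  } else {
    angle += 0.03;
    fill(0, 0, 200, map(flow, 0, 1, 40, 255));
  }
  noStroke();
  ellipse(width/2 + cos(angle) * radius, height/2 + sin(angle) * radius, tipSize, tipSize);

  // clear the drawing in between breaths (when the flow is almost zero)
  if (flow < 0.02) background(255);
}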

Other software used in, and in aid of, this project included Csound, Skype, Dropbox (see below) and NTP[11]. The latter was very important, as the timestamps of the breath data points had to be in sync on all machines.

Adding sound

My friend Richard van Bemmelen, a musician and programmer, kindly offered to help me add sound to the animation. My idea was to create a bamboo wind chime with our breaths, producing a sound only when the breath status changed from in to out or vice versa. Richard is an advanced user of Csound[12] and wanted to use that program. As bamboo already exists as an opcode[13] we could start quickly. The sound produced by Csound wasn't the rattle of sticks but a far more beautiful, flute-like sound. The pitch depends on the value of the breath flow data. To make everything work, Csound had to be installed on all the participants' PCs, together with a custom .csd file which defines the settings for the synthesizer. To make starting the sound part easy, Richard created a batch file that starts Csound and makes it wait for messages from Processing. For communicating with Csound the oscP5 library[14] was used in Processing. A message with the breath value was sent whenever the breath status changed.
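
In Processing that part looks roughly like this (a simplified sketch; the OSC address pattern and port numbers are placeholders of mine, and the breath value is simulated):

import oscP5.*;
import netP5.*;

OscP5 oscP5;
NetAddress csound;
boolean lastBreathingIn = true;

void setup() {
  oscP5 = new OscP5(this, 12001);               // local listening port (placeholder)
  csound = new NetAddress("127.0.0.1", 7770);   // port Csound listens on (placeholder)
}

void draw() {
  float breathValue = sin(frameCount * 0.02);   // stand-in for the real sensor reading
  boolean breathingIn = breathValue > 0;

  // only send a message when the breath direction changes, like a wind chime being hit
  if (breathingIn != lastBreathingIn) {
    OscMessage msg = new OscMessage("/breath");
    msg.add(breathValue);
    oscP5.send(msg, csound);
    lastBreathingIn = breathingIn;
  }
}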

The performances

breathing_time was a networked performance. I selected five people of different nationalities to take part in the experiment; with that I wanted to underline the universal character of breathing. From five different locations these five people would create sound and visuals using only their breath. Because of the drawingBreath software all participants saw the same animation and heard the same sounds, so this output could act as feedback for them. I was in Brussels, performing for an audience that saw and heard the same things as the participants.

One thing that took a lot more effort than anticipated was preparing the participants for the actual performances. To test the server and different versions of the software we had planned four test sessions at the start. But first all software had to be installed on the different computers. Right at the beginning I had to move everybody to the Windows platform, as running a Processing application built on a Windows PC on a Mac turned out to be a hassle; the drivers for the USB Bub were also absent for the Mac.

Having equipped two participants with my old laptops, we could start testing. The Sense-os server did a very good job. The main problem was instructing everybody and making sure that the software and Csound updates were put in the right folders. I used Dropbox[15] to supply updates and manuals, but even that was hard for some people. Through Skype I gave live instructions and could answer questions from all participants at the same time. After a good final rehearsal it was time for the real thing.

The performances started with each participant introducing him/herself in a pre-recorded sound file, in both their mother tongue and English. At exactly 19:00 everybody started their drawingBreath program, and calibration ran while the introductions continued.

Our assignment for the performances was: relax and breathe naturally. Try to detect your own breath cycle and see if you can leave some time between each breath. If these moments in between breaths coincided, the screen would be cleared and we would have reached some sort of communal breathing.

The most important thing I learned from the performances is that breathing is a very personal thing that isn't easily manipulated. This shows very well in the CommonSense logs, where you can see each breathing pattern almost as a signature.[16] Our breathing gaps didn't coincide, but the different movements of the breath flows were interesting to watch.

I also realised that although the performances went reasonably well, this is just the beginning. There are so many things that could be improved for which I simply lacked the time. Enthusiastic reactions have brought me new ideas for working with the concept. I'm considering creating an online community to improve the hard- and software, to breathe together online and to explore the idea of creating a communal “breathing time” further.

Specifications

drawingBreath software (Processing & Java), breathCatcher hardware (Arduino RBBB, Modern Device Wind sensor, USB Bub, USB cable, paper, felt, elastic band), sensor platform (CommonSense API), sound (Csound & Processing)

Credits

Concept, design, development & programming: Danielle Roberts

Sound: Richard van Bemmelen

CommonSense API: Sense-os

Participants: Adriana Osorio Castrillon, Lorenzo Brandli, Mieke van den Hende, Tomoko Baba

Location: Imal, Brussels

Also made possible by OKNO

Blog: http://www.numuseum.nl/blog/category/breathing_time/



[1] http://timeinventorskabinet.org/

[2] http://www.timeinventorskabinet.org/wiki/doku.php/windclocks

[3] https://en.wikipedia.org/wiki/Ensō

[4] http://jeelabs.com/products/rbbb

[5] http://shop.moderndevice.com/products/wind-sensor

[6] http://jeelabs.com/products/usb-bub

[7] http://www.sense-os.nl/commonsense

[8] http://www.sense-os.nl/api-console

[9] http://processing.org/

[10] Processing serial and net, guicomponents GTimer class, org.json and Java.net.URL and URLConnection classes

[11] http://www.meinberg.de/english/sw/index.htm

[12] http://www.csounds.com/

[13] http://www.csounds.com/manual/html/bamboo.html

[14] http://www.sojamo.de/libraries/oscP5/

[15] https://www.dropbox.com

[16] http://www.numuseum.nl/blog/2012/05/11/performance-11-5/

test session

I've been working like mad for the last couple of weeks to get the 'drawingBreath' software going. Main issues:

  • working with the sense-os API, more specifically formatting the strings to be sent to and retrieved from the server
  • getting the custom software to work on the various PCs
  • making the software work for five sensors instead of one

From the above you can tell that I'm just an artist struggling to program without proper training. But I have learned a lot again, especially about JSON in Java and about iteration. And I was happy I had finished my two Java courses; at least now I had a good idea of what I was doing. The software can now do the following:

  • Login to the sense-os platform and get a session id
  • List all the ids of the 5 sensors
  • Read data from the serial port
  • Format (JSON) and send that data with a time stamp
  • Retrieve the data from all 5 sensors
  • Calibrate all 5 sensors
  • Make a drawing for every sensor
  • Make sounds for every sensor
  • The different tasks are all conducted by separate timers

I only want to fine-tune the drawing and its speed, but for the most part it's finished(!).

I've conducted some test sessions with a smaller group, but yesterday evening was the first time there were four of us. It went surprisingly well. No problems with the server, which had been a bit unreliable lately. And the visual and audio results were promising:

work in progress

I'm working on the breath detection software and the device design. I've concentrated on detecting breathing in and out and the event between breaths. Especially the distinction between in and out is hard, as there is wind flow in both cases. So I use the difference in temperature to tell inhaling and exhaling apart. I set two calibration points (each by pressing a different key): one between breaths and one after completely inhaling. At both points I store the wind value and the temperature value. With these two extremes known I can now detect the breath status in a rather robust way. See the screen dump from Processing:
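
Beyond what the screen dump shows, the detection logic looks something like this in simplified form (a Processing sketch illustrating the idea; the thresholds and the exact comparison are my own approximation, and in the real sketch the wind and temp values come in from the serial port):

float restWind, restTemp;        // calibration point: between breaths
float inhaleWind, inhaleTemp;    // calibration point: after completely inhaling
float wind, temp;                // current sensor readings (updated from the serial port)

void setup() {
  size(200, 200);
}

void draw() {
  // in the real sketch wind and temp are read from the sensor here
  println(breathStatus());
}

void keyPressed() {
  if (key == 'r') { restWind = wind;    restTemp = temp; }   // store "between breaths" point
  if (key == 'i') { inhaleWind = wind;  inhaleTemp = temp; } // store "inhaling" point
}

// classify the current readings against the two calibrated extremes
String breathStatus() {
  float flowRange = abs(inhaleWind - restWind);
  if (abs(wind - restWind) < 0.1 * flowRange) {
    return "between breaths";                 // hardly any flow over the sensor
  }
  // temperature tells the direction: is the reading closer to the inhale or the rest point?
  if (abs(temp - inhaleTemp) < abs(temp - restTemp)) {
    return "in";
  }
  return "out";
}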

I rather like the space look of the breathing cone. More work should be done on it of course:

device and software

I’m currently developing the device design and the software for breathing_time. I realise now that the form factor of the device must come from the possibilities of the wind sensor. It works best when the breath is guided to the sensor. That way I can detect inhaling and exhaling. I’ve been trying out different sizes of cones to fit on my face:

And finally settled for a size in between:

In this prototype I built the sensor into the cone, which makes it nice and stable and catches the breath in an optimal way. I also tried a collar-type design:

Here the “collar” catches the wind. It works fine and has some advantages, but you can't capture inhaling this way.

I've done some work on the software. I take the wind values and the temperature values from the sensor; the temperature values give a good indication of the direction of the breath. I now use the combined data to determine the direction of the animation (depending on in- or exhaling) and the colour. But of course a lot more is possible.

I’ve constructed a “brush” from various shapes in different sizes. It will be nice to generate “brushes” dynamically, depending on the data. But for now I’m still refining the basic detection and I will continue with the aesthetics when that is stable.

work monitoring

I've been working on a Python program today that monitors which application I'm using. For now it writes the data with a timestamp to a text file, but of course I want to share this data on numuseum, so I'm now researching the use of sockets. I've only just discovered that Flash can also work with sockets… That's very interesting for me: I should be able to push live data to a Flash movie. Wow!

Work_monitoring

numuseum archive backend

I finally managed to get all the stuff about the back-end workings of the archive part of numuseum from my head into a diagram. It’s a great relief to see all the subjects in their different visualisations attached to the correct tools and APIs. I’ve got so much room in my head now that I’ve got new ideas for an interface metaphor. To be continued…

Numuseumarchief_map