Soft Meditation, first prototype

For the past couple of weeks I have been working on the first version of my Soft Meditation piece. This is a performance in which I meditate while live sensor data is transformed into an animated, artistic visualisation.

Soft Meditation Performance photograph by Kristin Neidlinger

Background

For the past year I have been developing, together with a team, the Meditation Lab Experimenter Kit: a tool-kit consisting of a suit with sensors and software that lets you monitor and optimise your meditation practice through self-experimentation and interaction with the environment.
Soft Meditation is the first application made with this tool-kit. It uses the API to create generic imagery from live sensor data collected with the suit. My aim is to explore whether donating personal data can create a positive, meditative effect in others, even though they aren't meditating themselves.

Why soft?

The title of the performance refers to the environmental psychology term soft fascination, coined by Kaplan and Kaplan as part of their attention restoration theory. In my own words: the theory describes how looking at natural phenomena, like waves on the water, captures your attention without causing any cognitive strain. That way the mind can restore and refresh. Meditation is all about attention, and I am looking for an easy way to capture the visitors' attention and take them to a place of calm.
Doing this with meditation is, despite popular belief, quite hard work. So soft also refers to the gentle and playful way in which, I hope, a meditative state of mind is achieved.

Inspiration For Soft Meditation

How soft?

But how do I capture attention in a way that is calming and uplifting? I've read some articles (see references below) about the affective properties of motion graphics and compiled an inventory of effects. For my goal it would be best to use slow, linear motion from left to right. I could then play with speed and waviness to create more intensity and interest depending on the direct sensor input.
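
To make this concrete, here is a minimal sketch in Processing (the environment I ended up using, see below) of that kind of motion: a single wave drifting slowly from left to right, with speed and waviness as the two parameters that sensor data could modulate. The values are placeholders, not the performance code.

// A calm wave drifting from left to right; speed and waviness are the
// parameters that live sensor data could later modulate.
float speed = 0.5;     // horizontal drift, kept low for a calm feel
float waviness = 20;   // wave amplitude in pixels
float phase = 0;

void setup() {
  size(800, 400);
  stroke(255);
  noFill();
}

void draw() {
  background(0);
  phase += speed * 0.01;
  beginShape();
  for (int x = 0; x < width; x++) {
    // subtracting the phase inside the sine makes the wave travel to the right
    float y = height/2 + sin(x * 0.02 - phase * TWO_PI) * waviness;
    vertex(x, y);
  }
  endShape();
}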

Prototype design

For years I've been thinking about expressing my inner meditation state through a water metaphor. The movement of water is endlessly fascinating and mysterious, and to my mind perfectly suited to my intentions. I looked for inspiration online, which helped narrow down which software environment to choose.
After exploring various platforms, languages and libraries I ended up with good old Processing as a platform. I found this sketch online which offered a nice starting point to build on, and started modifying it.

Exploring the Box Waves Processing sketch

Since I wanted a complex and lively wave animation, I chose pitch (the nodding movement of the head), breathing (upper and lower), finger pressure and heart-rate as input sensors.
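
To illustrate the idea (this is not the actual prototype, which builds on the Box Waves sketch), a hypothetical mapping of those four inputs onto wave parameters could look like this; the sensor ranges are guesses:

// Hypothetical mapping of the four chosen inputs onto wave parameters.
float waveSpeed, waveAmp, waveDetail, waveTilt;

void updateWaveParameters(float pitch, float breathDepth, float pressure, float bpm) {
  waveSpeed  = map(bpm, 50, 100, 0.2, 1.0);         // faster heart -> faster wave
  waveAmp    = map(breathDepth, 0, 200, 10, 60);    // deeper breathing -> higher wave
  waveDetail = map(pressure, 0, 1023, 0.01, 0.05);  // finger pressure -> more waviness
  waveTilt   = map(pitch, -30, 30, -PI/16, PI/16);  // nodding (pitch) -> gentle tilt
}

void setup() {
  updateWaveParameters(5, 120, 400, 62);   // example readings
  println(waveSpeed, waveAmp, waveDetail, waveTilt);
}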
SoftMed Prototype

Interaction with the audience

I have been thinking about how to make the performance multi-directional; I wanted to somehow include the audience in what is happening on the screen. What the audience and I share are the sounds in the room. I decided to use the marker button provided with the suit to change the animation speed depending on the loudness of the sounds. My idea was that over time the audience would notice the relation between the sounds and the speed.

The first performance

I was invited to give a short presentation at the Human-Technology Relations: Postphenomenology and Philosophy of Technology conference at the University of Twente. Instead of a talk I decided to test my prototype. I could only last for five minutes. I had programmed the sound of a bell at the beginning and the end. I was facing the wall while the audience looked at a big screen above my head.


I was a bit nervous about how it would be to meditate in front of some 30 strangers. But once I sat down it was just like it always is: noticing my body (pounding heart) and mind.

I was less pleased with the demo effect: one sensor was not working properly (I still don't know why). This created hard-edged shapes and motion from right to left, the exact opposite of the intended animation.

I tried pressing the marker button whenever I heard something, but as the performance progressed the room became more and more silent. I suppose that is a sign that it worked, but it was not something I had counted on.

Measurements

I am of course interested in the effects of the performance. I supplied the audience with the Brief Mood Introspection Scale (BMIS). Four sub-scores can be computed from the BMIS: Pleasant-Unpleasant, Arousal-Calm, Positive-Tired and Negative-Relaxed Mood. I asked the audience to fill it in before (baseline) and after the performance. Ten questionnaires were returned, of which six were complete and correct. I am working on the results and will report on them in a later post.

Reactions

I was pleased to hear that people were fascinated by the wave and tried to work out what it signified. People found the performance interesting and aesthetically pleasing. We discussed what caused the effects: the context, the staging of me sitting there and people wanting to comply, the animation or the silence? A lot of things to explore further!
One participant came up to me later and explained how much impact the performance had on him. He found it very calming. “Everything just dropped from me” he explained. It also made him think about silence in his life and looking inward more. This is all I can hope to achieve. I continue my research with new energy and inspiration.

The next version of the performance will be on show during the biggest knowledge festival in the south of the Netherlands (het grootste kennisfestival van zuidnederland) in Breda on September 13th.

References
- Feng, C., Bartram, L., & Gromala, D. (2016). Beyond Data: Abstract Motionscapes as Affective Visualization. Leonardo, 50. doi:10.1162/LEON_a_01229
- Lockyer, M., & Bartram, L. (2012). Affective Motion Textures. Computers & Graphics.
- Piff, P. K., Dietze, P., Feinberg, M., Stancato, D., & Keltner, D. (2015). Awe, the Small Self, and Prosocial Behavior. Journal of Personality and Social Psychology, 108, 883-899. doi:10.1037/pspi0000018

Relief! The organisational problems subside

Last week I was busy sewing the sketch of the Silence Suit, but the project also kept me busy in my head. I was really wondering how Danielle's week would go. Last time she seemed stressed and exhausted because of the many organisational problems, so before I came this time I really hoped that many of them would have been solved. I was excited to see whether the wearable would fit and whether we could start looking at the sensors and the wiring.

So the question at the beginning, “How are you? And how was your week?”, had an extra meaning today. She seemed relieved: “I could eliminate many stress factors,” she said. That means she knows every team member can meet the milestones. Moreover, she no longer has to solve the organisational problems all on her own; her mentor will take them on.

Because she didn't have to focus on these things anymore, Danielle could concentrate on the cooperation with Design Lab. Students from the University of Twente will work on the design, development and production of the PCBs for the microcontroller. Furthermore, they will optimise the suit and the cabling, as well as the interaction with the suit.

“So this is all very good news. But my highlight was the visit of Tom Bergman from Philips Research.” Last Friday he came to show how light can be used to influence the atmosphere of an environment. Wearing the suit, Danielle could test whether the light had an influence on her mood, and by sitting in front of the Philips device she could see that the suit's sensors detect the intensity and colour of the light. The experiment was successful, and we now know that the device is powerful enough to influence the environment.

testing the influence of light - with Tom Bergman from Philips

It is a well-known phenomenon that you can recharge your energy by experiencing nature. It is called restoration, and Danielle already worked with it in one of her projects, Virtual View. But it seems that this phenomenon also works with light. Maybe light could work both as an influence on and as an expression of the quality of the meditation session.

Besides that, we wanted to focus on the sensors and the cabling today. The sensor in the neck that detects movement has to be optimised. We made a first try by simply sticking the sensor on the neck, to find out where it has to sit to detect the movement best. We validated what Danielle already expected: the sensor has to sit as high as possible on the neck.

neck sensor - logging different positions

Moreover, we had to work on the sketch of the wearable I made, because the sensor has to be included in the neck of the wearable. On the one hand the sensor has to sit tight enough to detect every movement, but on the other hand it still has to be comfortable. We tried different options. We chose a turtleneck with some Velcro at the front to open, close and tighten the suit, but that was not enough. The other option we tried, with some elastic, does not work either: it is still too slack and the sensor does not move enough to register anything. So next week we have to look for another option. We already have some ideas in mind, but maybe you know a solution for our issue. How would you solve this problem? Let us know by leaving a comment.

turtleneck - trying to include the neck sensor in the sketch of the vest

So you see: by outsourcing the tasks that stress you, you can focus on the things you like most. It was really nice to see Danielle this happy and relieved today. I learned for myself how worthwhile it can be to ask one another for help. You see the value of a team. Maybe the things you are struggling with are much less complicated for others.

Introducing Silence Suit

Meditation stool with soft sensor and heart-rate sensor

For over a year I've been working on a meditation wearable. It measures biometric and environmental input. Its goal is to use the measurements to improve your meditation and to generate artistic visualisations from the data. The wearable is part of a bigger project, Hermitage 3.0, a high-tech living environment for 21st-century hermits (like me). Now that the wearable project is taking shape I'd like to tell a little about the process of creating it.

The sensors
I started with a simple but surprisingly accurate heart-rate sensor. It works with the Arduino platform. It uses an ear-clip and sends out inter-beat intervals and beats per minute at every beat. With some additional code in Processing I can calculate heart-rate variability. These are already two important measures that can tell a lot about my state while meditating. Then I added galvanic skin response to measure the sweatiness of my skin, a nice indicator of stress or excitement. I added an analogue temperature sensor that I put on my skin to measure its temperature; low skin temperature also indicates a state of relaxation. I also made a switch sensor that is attached to my meditation stool. Sitting on it indicates the start of a session, getting up marks the end.
All sensors were connected with wires to my computer, but the aim was, of course, to make it wireless so I'd be free to move. Even so, I could already see day-to-day changes in my measurements.
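
For readers curious about the HRV calculation: a common measure is RMSSD, the root mean square of successive differences between inter-beat intervals. A minimal Processing sketch of that calculation (not the exact code used with the suit) looks like this:

// Collect inter-beat intervals (in ms) and compute RMSSD over them.
ArrayList<Float> ibis = new ArrayList<Float>();

void addBeat(float ibiMs) {
  ibis.add(ibiMs);
}

float rmssd() {
  if (ibis.size() < 2) return 0;
  float sum = 0;
  for (int i = 1; i < ibis.size(); i++) {
    float d = ibis.get(i) - ibis.get(i - 1);  // successive difference
    sum += d * d;
  }
  return sqrt(sum / (ibis.size() - 1));       // root mean square of the differences
}

void setup() {
  // simulate a few beats for illustration
  addBeat(820); addBeat(830); addBeat(815); addBeat(840);
  println("RMSSD: " + rmssd());
}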

A little help from my friends
As things were becoming more complex I posted a request for help in a Facebook group. A colleague, Michel, offered to help. We first looked at different ways to connect wirelessly. Bluetooth was a problem because it has a very short range. XBee also wasn't ideal because you need a separate connector. We also made a version where we could write to an SD card on the device, but that of course doesn't offer live data, which was crucial for my plans. We finally settled on WiFi using the SparkFun ESP8266 Thing Dev. We were going to need a lot of analogue pins, which the Thing Dev doesn't offer, so we used the MCP3008 chip to supply eight analogue inputs.

Overview of all the sensors

More is more
We could then increase the number of sensors. We added an accelerometer for neck position and replaced the analogue skin temperature sensor with a nice and accurate digital one. Around that time a wearable from another project was finished: a vest with resistive rubber bands that measures expansion of the chest and belly region. Using the incoming analogue values I can accurately calculate breath-rate and upper and lower respiration. Then it was time to add some environmental sensors. They give more context to, for example, the GSR and skin temperature readings. We added room temperature and humidity, light intensity and RGB colour, and air flow.
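
As an aside, breath-rate can be estimated from the stretch values with simple peak counting. The sketch below is a rough illustration of that idea; the smoothing factor and thresholds are guesses, not the values used in the suit.

// Count a breath whenever the smoothed chest signal crosses its running mean upwards.
float smoothed = 0;
float baseline = 0;
boolean above = false;
int breaths = 0;
int startTime;

void setup() {
  startTime = millis();
}

float processChestSample(float raw) {
  smoothed = lerp(smoothed, raw, 0.1);       // light low-pass filter
  baseline = lerp(baseline, smoothed, 0.01); // slow running mean as baseline
  if (!above && smoothed > baseline + 5) {   // upward crossing = start of an inhale
    breaths++;
    above = true;
  } else if (above && smoothed < baseline - 5) {
    above = false;
  }
  float minutes = (millis() - startTime) / 60000.0;
  return minutes > 0 ? breaths / minutes : 0;  // breaths per minute so far
}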

Vest with sensors

Environmental sensors

Seeing is believing
From the start I've made simple plots to get a quick insight into the session data. For now they don't serve an artistic purpose but are purely practical. At this point it is still essential to see whether all sensors work well together. It's also nice to get some general insight into how the body behaves during a meditation session.
The data is also stored in a structured text file. It contains minute-by-minute averages as well as means for the whole session.
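
For the curious, logging minute-by-minute averages in Processing can be as simple as the sketch below; the column names are made up for illustration and the real log format may differ.

// Accumulate samples and write one line of averages per minute.
PrintWriter log;
float hrSum, gsrSum;
int sampleCount = 0;
int lastMinute = 0;

void setup() {
  log = createWriter("session.txt");
  log.println("minute\tavg_heart_rate\tavg_gsr");
}

void logSample(float heartRate, float gsr) {
  hrSum += heartRate;
  gsrSum += gsr;
  sampleCount++;
  int minute = millis() / 60000;
  if (minute > lastMinute && sampleCount > 0) {
    log.println(minute + "\t" + hrSum / sampleCount + "\t" + gsrSum / sampleCount);
    log.flush();
    hrSum = 0; gsrSum = 0; sampleCount = 0;
    lastMinute = minute;
  }
}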

Session data plot with legend

I've also made a Google form to track my subjective experience of each session. I rate my focus, relaxation and perceived silence on a 7-point Likert scale, and there is a text field for remarks about the session.

Results from Google form: very relaxed but not so focussed...

Suit
I used the vest from the other project to attach the sensors to. But last week costume designer Léanne van Deurzen made a first sample of the wearable. It was quite a puzzle for her and her interns to figure out the wiring and positioning of every sensor. I really like the look of this first design. It fits the target group, high-tech hermits, and it is also very comfortable to wear.

Upper and lower part of the suit

Back with extension where soft sensors to detect sitting will be placed

The future
The next step will be adding sensors for measuring hand position and pressure, and a sound-level sensor.
Then we will have to make the processing board a bit smaller so it can fit in the suit. We can then start integrating the wiring and replacing it with even more flexible wires.
When all the sensors are integrated I can really start looking at the data and searching for interesting ways to explore and understand it.
I'm also looking for ways to fund the making of 15 suits. That way I can start experiments with groups and find ways to optimise meditation by changing the environment.

sleepGalaxy: design & calories

Design

I've been working on the overall design step by step, alternating between coding and looking. I want to incorporate my calorie intake after 6 PM. I'm not recording the times I ate, and I suspect they influence my sleep as a whole. So the most logical position is to circle them all around the "sleep circles". There is a lot of difference in daily intake after 6 PM, ranging from zero to 900 calories so far. I wanted to plot every calorie, so they would have to change size depending on the amount. I also wanted to spread the calories evenly around the entire circle. How to go about that? Fortunately, I found this great tutorial. The code is deprecated and the feed doesn't seem to work any more, but I managed to recycle the code for plotting the elements in a circle.

Plotting numbers instead of dots

The code uses translate and rotate, which are (for me) hard concepts to grasp. So instead of using dots in the design I used numbers, to get insight into how the elements are placed on the screen.
By keeping the size of the calorie circle constant, you can already see relations between the sleep duration, the amount of calories eaten and recovery.
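
In essence, the placement works like this (a simplified reconstruction, not the recycled tutorial code): rotate by an equal angle step for each element and translate out to the ring, drawing the index number instead of a dot.

int calorieCount = 40;   // illustrative value; one element per calorie unit
float radius = 150;

void setup() {
  size(400, 400);
  textAlign(CENTER, CENTER);
}

void draw() {
  background(0);
  fill(255);
  float step = TWO_PI / calorieCount;
  pushMatrix();
  translate(width/2, height/2);        // move the origin to the circle centre
  for (int i = 0; i < calorieCount; i++) {
    pushMatrix();
    rotate(i * step);                  // turn to this element's position
    translate(radius, 0);              // move out to the ring
    text(i, 0, 0);                     // draw the index instead of a dot
    popMatrix();
  }
  popMatrix();
}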

Evening with a lot of calories

Evening with fewer calories

In the design you can also see an eclipse. These are the stress and happiness values for the whole day; I poll them by picking a number between 1 and 7 in the form at the end of the day. The mood is the bright circle. The stress circle covers the bright one depending on the amount of happiness felt during the day. By vertically changing its position I can create a crescent, which can turn into a smile or a frown. The opacity of the black circle indicates the amount of stress. I'm coding this at the moment.
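
A minimal sketch of how such an eclipse could be drawn, assuming the 1-7 scales from the form; the colours and offsets are illustrative, not the final design.

// A bright mood disc partly covered by a darker disc: the vertical offset shapes
// the crescent (smile or frown), the cover's opacity stands for stress.
void drawEclipse(float x, float y, float size, int mood, int stress) {
  noStroke();
  fill(255, 230, 120);                                        // bright mood disc
  ellipse(x, y, size, size);
  float offset = map(mood, 1, 7, -size * 0.3, size * 0.3);    // higher mood -> crescent smiles
  float alpha  = map(stress, 1, 7, 60, 255);                  // more stress -> darker cover
  fill(0, alpha);
  ellipse(x, y - offset, size, size);                         // covering disc, shifted vertically
}

void setup() {
  size(300, 300);
}

void draw() {
  background(30);
  drawEclipse(width/2, height/2, 150, 6, 3);   // example: good mood, moderate stress
}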

sleepGalaxy: recovery

As I explained in my previous post I find the recovery measurement very useful. It seems a good representation of how rested I feel. It is calculated using RMSSD. The Emfit knowledge base explains it like this: “… For efficient recovery from training and stress, it is essential that parasympathetic nervous system is active, and our body gets sufficient rest and replenishment. With HRV RMSSD value one can monitor what his/her general baseline value is and see how heavy exercise, stress, etc. factors influence it, and see when the value gets back to baseline, indicating for example capability to take another bout of heavy exercise. RMSSD can be measured in different length time windows and in different positions, e.g. supine, sitting or standing. In our system, RMSSD is naturally measured at night in a 3-minute window during deep sleep, when both heart and respiration rates are even and slow, and number of movement artifacts is minimized…” Here is an example of how recovery is visualised in the Emfit dashboard:

Emfit dashboard

I looked for a way to integrate this measure in a manner fitting with my "planet metaphor". I've chosen a kind of pivot idea; it is vaguely reminiscent of the rings around planets.

Using the mouse pointer to enter different values of recovery

I thought it would be easy to just draw a line straight through the middle of the circles, tilting depending on the height of the score. It was harder than expected. I ended up using two mirroring lines and vectors. The starting point was the excellent book by Daniel Shiffman, The Nature of Code.

Integrating with circle visualisations.

Once I got the basics working, I went on to refine how the line should look projected over the circles. Going up from the lower left corner indicates positive recovery, visualised by the green line; the more opaque, the better the recovery. Negative recovery, of course, goes the other way around.
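
A simplified reconstruction of this pivot idea: a single rotated line stands in for the two mirrored lines, and the recovery score (assumed here to range from -100 to 100) drives both the tilt and the opacity.

void drawRecoveryLine(float cx, float cy, float len, float recovery) {
  // positive recovery tilts the line up from the lower left corner
  float angle = map(recovery, -100, 100, PI/4, -PI/4);
  float alpha = map(abs(recovery), 0, 100, 30, 255);
  stroke(recovery >= 0 ? color(0, 200, 80, alpha) : color(200, 60, 60, alpha));
  strokeWeight(2);
  pushMatrix();
  translate(cx, cy);
  rotate(angle);
  line(-len/2, 0, len/2, 0);   // the line and its mirror image form the full pivot
  popMatrix();
}

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  drawRecoveryLine(width/2, height/2, 300, 60);  // example: a fairly good recovery
}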

Slight recovery

There is a difference in the starting points from which the recovery is calculated. Sometimes my evening HRV is very high, which results in a meagre or even negative recovery. I might think of an elegant way to incorporate this in the visual. Maybe I have to work with an average value. For the moment I'm still trying to avoid numbers.

Almost maximum recovery

Negative recovery

sleepGalaxy: kick off

Finally, I've started to work on a piece that has been on my mind for almost two years, ever since I met the nice people from Emfit at the Quantified Self conference. They kindly gave me their sensor in return for an artwork I would make with it.

Emfit QS sleep sensor

You put the sensor in your bed, go to sleep and it wirelessly sends all kinds of physiological data to their servers: movement, heart rate, breath rate. From all this data together they calculate the different sleep stages. From the heart rate they've recently started calculating HRV and recovery. This latter value is, to me, the best indicator of my sleep quality and how energetic I feel.
Emfit offers a nice interface to explore the data and view trends.

In sleepGalaxy I want to explore the relationship between sleep quality and the following variables: exercise, social and work meetings, calorie and alcohol intake, screen time and overall happiness and stress during the day. I’m under the impression that these have the most impact on my sleep, that is, the sleep phases, the ability to stay asleep and recovery.

Google form

To track the variables I’ve created a Google form that I fill in every night before I go to sleep. I’ve set an alarm on my iPad so I don’t forget.

Excel sheet with some of the Emfit data

First circle visualisation

From all the Emfit data I'll be using a subset. My first sketches focus on the sleep phases. I've spent a couple of hours programming the basic idea: transforming the sleep phases into concentric circles, going from awake on the outside to light sleep, REM sleep and deep sleep in the centre.

The next step was to make sure the different phases are displayed correctly, representing the amount of time spent in each phase and the total time in bed. I'm programming in Processing and have created a class called Night. After reading in the Emfit Excel data as a csv file, I loop through the rows and create a Night object for every night.
Displaying the circles went fine, but the proportions between the circles just didn't look right. I realised I had a conflict working with minutes in a decimal context. I wrote a little function that converts the minutes into a fraction of an hour and adds it to the whole hours:
// Convert an "hours.minutes" string (e.g. "7.30" = 7 h 30 min) into decimal hours
float min2dig(String time) {
  String[] tmp = split(time, '.');
  if (tmp.length < 2) return float(tmp[0]);   // whole hours, no minutes part
  float t = float(tmp[0]) + (float(tmp[1]) / 60);
  return t;
}
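
To give an idea of the structure, here is a hypothetical outline of such a Night class; the field names, values and scale are illustrative, not the actual sleepGalaxy code.

class Night {
  float awake, light, rem, deep;   // hours spent in each phase (decimal, via min2dig)

  Night(float awake, float light, float rem, float deep) {
    this.awake = awake;
    this.light = light;
    this.rem = rem;
    this.deep = deep;
  }

  void display(float x, float y, float pixelsPerHour) {
    noStroke();
    // outer ring = total time in bed, inner rings = progressively deeper phases
    float total = awake + light + rem + deep;
    fill(60);  ellipse(x, y, total * pixelsPerHour, total * pixelsPerHour);
    fill(100); ellipse(x, y, (light + rem + deep) * pixelsPerHour, (light + rem + deep) * pixelsPerHour);
    fill(160); ellipse(x, y, (rem + deep) * pixelsPerHour, (rem + deep) * pixelsPerHour);
    fill(230); ellipse(x, y, deep * pixelsPerHour, deep * pixelsPerHour);
  }
}

Night example = new Night(0.5, 4.0, 1.5, 1.2);   // made-up values in hours

void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  example.display(width/2, height/2, 30);
}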

Now the basis of the visualisation is ready. The image below displays the sleep phases of the four nights in the Excel data above. I look forward to adding more data. To be continued…

Virtual View: conducting experiment two

Our ideal for the execution of the second experiment was to have 60 participants of 40 years and older. There would be two labs where the experiment would be held in alternating rooms over three days. The rooms would be in a quiet part of the school, as we had quite a lot of disturbance during the first experiment.

The first setback was the location. It wasn't possible to have two classrooms for three days at the same time, and there weren't any rooms available in a quiet part of the school. Eventually there was no other choice than to use a room in the middle of the busy documentation centre and spread the experiments out over five days. The room was a kind of aquarium: it was very light and you could see people walking around through the glass walls. During the tests there was disturbance from talking and from students opening the lab door by mistake. So, far from ideal.

But my main disappointment was with the sample. Only one day before the start of the experiment the students notified me that they had managed to recruit only 20 participants instead of the 60 we had agreed upon. We mostly depended on the teachers for participation, but it was the period of the preliminary exams and they were very busy. Also, the trial would now take 40 minutes instead of the 20 to 30 minutes the first experiment took. Had I known earlier, I could have taken steps and come up with a suitable solution.
As it was, I had to improvise. I had to let go of the control group and broaden the age range. In the end six students below 30 years old took part. I asked around in my own network and managed to recruit 10 people in the right age group. In the end we tested 40 people, all of whom were exposed to the stress stimulus.

Unfortunately not all the results were valid and useful. Some data was lost due to technical problems. Also, quite a number of people made mistakes filling in the questionnaires. We now had two questionnaires, one for self-reported stress and one for self-reported relaxation. The stress questionnaire contained one item in the positive direction (I feel everything is under control) and two negative items (I feel irritated; I feel tense and nervous). Both questionnaires had to be answered on a 10-point scale. Apparently this was confusing for some people, and even though notes were taken it wasn't always possible to reconstruct the correct answer. In the next experiment we will also put some text below the numbers to indicate their value.
There were also two very extreme results (outliers); they can't be included in the data set as they would skew the averages too much. So I ended up with 33 data sets I could use for my analyses.

But first the data had to be sorted and structured. It took me quite some time to streamline the copious EventIDE output into a useful SPSS dataset.

The baseline measurement included self-reported stress (pink), heart-rate (orange), heart-coherence (red) and self-reported relaxation (green).
The three answers from each questionnaire had to be combined into one value and checked for internal validity in SPSS.

It's nice to take a look at part of the results from the cognitive stress task.
From the output you can see exactly what the sums were, how much time it took to solve them, what the answer was and whether the given answer was correct or not. I didn't use this data, but it would be nice to see if, for example, participants with more errors have higher heart-rates. Heart-rate (orange) and heart-coherence (red) are again shown below the results.

Before each stimulus set came the stress questionnaire, and after each set the relaxation questionnaire. The output for each set, which consisted of 12 pictures with sound, is laid out as follows:
Picture count | set number | image id | image name | inter-beat interval | BPM | heart-coherence
Each picture was shown for 20 seconds and the heart data was logged around four times per second. The output for one picture looks like this: 60.6|60.5|60.4|60.9|61.2|61.5|61.7|61.8|61.9|61.9|61.9|61.9|61.9|61.9|61.8|62.6|63.1|63.5|63.7|63.5|63.3|63.2|63.2|63.1|63.7|63.8|63.9|63.4|63.1|62.9|62.7|63.1|63.5|63.6|63.6|63.7|63.7|63.8|63.8|63.8|63.4|63.2|62.9|62.8|62.9|62.9|62.9|62.9|62.6|62.2|62.1|61.8|61.5|61.3|61.2|61.1|61.1|61.0|60.9|60.8|61.1|61.3|61.4|61.5|61.6|61.6|61.7|61.7|61.7|61.5|61.3|61.3|61.2|61.2|61.0|60.9|60.8|61.0|61.1|61.2|61.2|61.2|61.3|61.3|61.4|61.4|61.0|
This yields an average of 62.1, which is the output I used. But it is good to have all this data for each individual image. All the image averages had to be combined into a set average so I could easily analyse the differences between the three sets. I'm still analysing the data; more on that in my next post.
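
For illustration, computing such a per-image average from the pipe-separated string is straightforward in Processing (the actual averaging was done inside EventIDE):

void setup() {
  String example = "60.6|60.5|60.4|61.2|";     // shortened sample of one picture's log
  println(averageFromLog(example));            // prints the mean of the samples
}

float averageFromLog(String logLine) {
  String[] values = splitTokens(logLine, "|"); // splitTokens ignores the trailing separator
  float sum = 0;
  for (String v : values) sum += float(v);
  return sum / values.length;
}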

Virtual View: building an experiment

I was very lucky to meet Ilia from Okazolab. When I told him about Virtual View and the research I was planning to do, he offered me a licence to work with EventIDE. This is a state-of-the-art stimulus creation software package for building (psychological) experiments with all kinds of stimuli. Ilia built this software, which was still under development at the time I met him. Besides letting me use the software, he offered to build an extension to work with the Heartlive sensor. He has been very supportive in helping me build my first experiment in EventIDE.

It is a very powerful program, so it does take a while to get the hang of it. The main concept is the use of Events (a bit like slides in a PowerPoint presentation) and the flow between these events. Each event can have a duration assigned to it. On the events you can place all kinds of Elements, ranging from bitmap renderers to audio players and port listeners. Different parts of the Event timeline can have snippets of code attached to them. The program is written in .NET; you do your coding in .NET and can also use XAML to create a GUI screen and bind items like buttons or sliders to variables which you can store.

You can quickly import all the stimuli you want to use and manage or update them in the library. From the library you drag an item onto a renderer Element so it can be displayed; it also gets a unique id. We'll use this id to check the responses to the individual images.

The Events don't have to follow a linear path; you can make the flow of the experiment conditional. For my design I made a sub-layer on the main Event timeline which holds the sets of images and sounds. The images in each set are randomised by a script, and so are the sets themselves, as we want to rule out order effects in the presentation. In the picture you can see the loop containing a neutral stimulus, six landscape pictures with sound and a questionnaire. This runs five times and then goes to the Event announcing the end of the experiment. During the baseline measurement and the sets the heart rate of the participant is measured, and the answers to the questions belonging to each set are logged.

Data acquisition and storage are managed with the Reporter element. You can log all the variables used in the program and determine the layout of the output. After the trial you can export the data directly to Excel or to a text or csv file. Apart from just logging the incoming heart rate values, we calculated their mean inside EventIDE for each image and for the baseline measurement. This way we can see at a glance what is happening in the responses to the different images.

For me it was kind of hard to find my way around the program: which snippet goes where, how do I navigate to the different parts of the experiment? But the more I've worked with the program, the more impressed I've become. It feels really reliable, and with the runs history you are sure none of your precious data is lost.

First sonification workshop

From 20 to 24 November last year I took part in the first sonification workshop at OKNO in Brussels. This workshop is part of the European ALOTOF project. I'll be working with them for the next two years on building a laboratory in the open field and making audio-visualisations of environmental and physiological data. Some thoughts on the workshop and the subject:

- What is your idea about ’sonification’ or even ‘audiovisualisation’?
I would like to use sound/silence, light and, for example, air flow to influence my inner state. I'd like to measure environmental and physiological data, use them to drive actuators, and then measure again to see the results.

- What were you working on in the workshop?
I had to invest a lot of time in reading the values from my decibel meter through the serial port with Processing. As measuring noise is important for my plans, I had to tackle that first. Unfortunately it took a lot longer than expected.
As I'm quite new to the world of sound, I explored some basic stuff using the Minim library for Processing (http://code.compartmental.net/tools/minim/). After trying some frequency modulation and synthesis, which sounded awful, I ended up using layers of sine waves.
I used years of mood data, which I read into Processing and sonified one row of data per second. I used three sine waves: 1. the current mood, 2. the average mood for that day, 3. the average mood for that year. The mood values were all mapped to frequencies between 60 and 400 Hz; the better the mood, the higher the tone.
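
A stripped-down version of that idea using Minim's Oscil unit generators might look like the sketch below; the mood scale (assumed 1-10 here) and the data loading are simplified.

// Three sine waves following the current, daily-average and yearly-average mood.
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil current, dayAvg, yearAvg;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  current = new Oscil(200, 0.2, Waves.SINE);
  dayAvg  = new Oscil(200, 0.2, Waves.SINE);
  yearAvg = new Oscil(200, 0.2, Waves.SINE);
  current.patch(out);
  dayAvg.patch(out);
  yearAvg.patch(out);
}

void sonifyRow(float mood, float dayMean, float yearMean) {
  // better mood -> higher tone, mapped between 60 and 400 Hz
  current.setFrequency(map(mood, 1, 10, 60, 400));
  dayAvg.setFrequency(map(dayMean, 1, 10, 60, 400));
  yearAvg.setFrequency(map(yearMean, 1, 10, 60, 400));
}

void draw() {
  // one row of mood data per second would be fed to sonifyRow() here
}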

I also worked with real-time data from the decibel meter, again using just sine waves, now with low frequencies of up to around 100 Hz. I measured the decibel level and stored it to calculate a running average over up to an hour. The other sine wave followed the current decibel level. The low frequencies didn't disturb the silence and acted like an echo.

- What are your plans for the future workshops?
My next step will be to work with physiological data from a muscle tension sensor (http://floris.cc/shop/en/sensors/807-muscle-sensor-v3-kit-.html) and hopefully with my heart and breath rate shirt (http://www.hexoskin.com/en). I'm hoping to produce sounds that will reduce tension and lower heart and breath rate. I'm thinking of reproducing natural sounds like birdsong, the rustling of leaves, etc.

@cocoon

Last week I joined the Seeker project, a co-creation project by Angelo Vermeulen exploring life in space(ships). It's been really inspiring so far, as living in space combines knowledge from the fields of architecture, sustainability, farming, water and power supply, and Quantified Self. The latter being my addition, of course :)

Together with architecture master's students from the TU/e I'm looking into the interior of the ship, which will be based on two caravans. As life in a spaceship is crowded and noisy, my aim is to make a quick and dirty prototype that will:

  • detect the noise level
  • detect the users’ heart-rate
  • promote relaxation by pulsating light and sounds from nature

Noise level, heart-rate and soundtrack will (hopefully) be sent to the base station so people have a live indication of the status of the ship and the people living in it.

This is the sketch:

Today I'll have a talk with the technicians from MAD to see what is possible. I'm thinking of using the following sensors:

Heart-rate: http://floris.cc/shop/en/sensors/731-pulse-sensor-amped.html

Noise level: http://floris.cc/shop/en/seeedstudio-grove/239-grove-sound-sensor.html

Playing sound: http://floris.cc/shop/en/shields/155-adafruit-wave-shield.html

The cocoon itself will be the good old paper lamp: