sleepGalaxy: final design

Displaying different activities with the right duration and start time

There were still a couple of variables to visualise once the basic design was ready. I had to work on integrating my pre-sleep activities. In the end I used three activity types: sport, social and screen (computer and television). For the first two I’d logged duration by recording start and finish times; for screen time I just logged total duration, because it was often scattered.
I was looking for a way to display all aspects (type, start, finish and duration) in a form that fitted the nice, round shapes I’d been using so far. Then I realised that the pre-sleep activities were all recorded from 18:00 onwards, so the main circle could act as a dial: I could divide the space from 18:00 till 23:59 using the activity durations. I calculated the starting position of each activity as a degree on the dial and added the minutes the activity lasted. Using the arc shape with a substantial line thickness resulted in nice, bold strokes around my “night” circles. Each activity type has its own colour.
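To give an idea of the arithmetic (a simplified sketch, not the actual sleepGalaxy code): with the full circle covering 18:00 to 23:59, the 360 minutes map conveniently onto 360 degrees, so one minute equals one degree. The function name, the colours and the choice to put 18:00 at the top are illustrative.

void drawActivity(float cx, float cy, float diameter, int startMin, int durationMin, color c) {
  // startMin = minutes after 18:00; one minute equals one degree
  float startAngle = radians(startMin) - HALF_PI;      // put 18:00 at the top
  float stopAngle  = radians(startMin + durationMin) - HALF_PI;
  noFill();
  stroke(c);
  strokeWeight(8);                                     // substantial thickness = bold stroke
  arc(cx, cy, diameter, diameter, startAngle, stopAngle);
}

void setup() {
  size(400, 400);
  background(0);
  // e.g. sport from 19:00 (60 minutes after 18:00), lasting 45 minutes
  drawActivity(width/2, height/2, 300, 60, 45, color(255, 120, 0));
  // and screen time from 21:30 (210 minutes in), lasting 90 minutes
  drawActivity(width/2, height/2, 300, 210, 90, color(0, 160, 255));
}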

The final night design (rating still in green)

I was happy with the result, but next to it the recovery line just looked plain ugly. I decided to use the same arc shape on the other side of the circle: the better the recovery, the thicker the stroke in green; the worse the recovery, the thicker the line in red.

Finally there was the subjective rating of the sleep. I think it is important to incorporate how the night felt to me. Emfit uses a star system from 1 to 5 stars. I played around with stars, ellipses and other shapes, but finally settled on simple golden dots. A five-star night has the fifth and biggest dot in the middle of the deep sleep circle, which seemed fitting.

UFO-like rating design

When the individual nights were finished it was time for the overall poster design. I had somehow got it into my head that this would be easy, but it was quite hard to capture the look and feel I was aiming for. I wanted the poster to be simple, so that the individual nights would stand out and make a nice “galaxy”. On the other hand, I did want a legend and some explanation of what was on display.

Sketch of the poster design

My first idea was to go for a size of 70 x 100 cm, with nights around 10 cm across, but that turned out too small for all the details to be visible. My final poster will be 91 x 150 cm. The nights are big enough, and they all have enough space on the sheet while it is still possible to compare them. I found the nice, slim font Matchbook for the title, the legend and the text. I’ll be sending the pdf to the printer next week.

sleepGalaxy: design & calories

Design

I’ve been working on the overall design step by step, alternating between coding and looking. I want to incorporate my calorie intake after 6 PM. I’m not recording the times I ate, and I suspect the calories influence my sleep as a whole, so the most logical position is a ring all around the “sleep circles”. There is a lot of variation in daily intake after 6 PM, ranging from zero to 900 calories so far. I wanted to plot every single calorie, so the dots would have to change size depending on the amount. I also wanted to spread the calories evenly around the entire circle. How to go about that? Fortunately, I found this great tutorial. The code is deprecated and the feed doesn’t seem to work any more, but I managed to recycle the code for plotting the elements in a circle.


Plotting numbers instead of dots

The code uses translate and rotation, which (for me) are hard concepts to grasp. So instead of the dots from the design I plotted numbers, to get insight into how the elements are placed on the screen.
By keeping the size of the calorie circle constant, you can already see relations between the sleep duration, the amount of calories eaten and recovery.
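The gist of the recycled technique, reconstructed from memory rather than copied from the tutorial: rotate the coordinate system by a fixed step per element, translate outwards, and draw. Plotting each element’s index as text shows exactly where it lands. The element count is illustrative.

int numElements = 24;   // e.g. one element per 10 calories; illustrative
float radius = 150;

void setup() {
  size(400, 400);
  background(255);
  fill(0);
  textAlign(CENTER, CENTER);
  translate(width/2, height/2);        // move the origin to the centre of the circle
  float step = TWO_PI / numElements;   // angle between neighbouring elements
  for (int i = 0; i < numElements; i++) {
    pushMatrix();
    rotate(i * step);                  // rotate the whole coordinate system
    translate(radius, 0);              // then step outwards along the x-axis
    text(i, 0, 0);                     // draw the index instead of a dot
    popMatrix();
  }
}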


Evening with a lot of calories


Evening with fewer calories

In the design you can also see an eclipse. This represents the stress and happiness values for the whole day, which I poll by picking a number between 1 and 7 in the form at the end of the day. The mood is the bright circle. The stress circle covers the brightness, depending on the amount of happiness felt during the day. By changing its vertical position I can create a crescent, which can turn into a smile or a frown. The opacity of the black circle indicates the amount of stress. I’m coding this at the moment.
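A minimal sketch of that eclipse idea; the 1 to 7 scales come from the form, but the offset and opacity ranges are my working assumptions:

void drawEclipse(float x, float y, float d, int mood, int stress) {
  noStroke();
  fill(255, 240, 180);                       // the bright mood circle
  ellipse(x, y, d, d);
  float offset = map(mood, 1, 7, 0, d/2);    // happier = more of the bright circle left visible
  float alpha  = map(stress, 1, 7, 60, 255); // more stress = more opaque cover
  fill(0, alpha);
  ellipse(x, y - offset, d, d);              // the shifted dark circle carves the crescent
}

void setup() {
  size(300, 300);
  background(30);
  drawEclipse(width/2, height/2, 120, 5, 3);
}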


sleepGalaxy: recovery

As I explained in my previous post, I find the recovery measurement very useful; it seems a good representation of how rested I feel. It is calculated using RMSSD. The Emfit knowledge base explains it like this: “… For efficient recovery from training and stress, it is essential that parasympathetic nervous system is active, and our body gets sufficient rest and replenishment. With HRV RMSSD value one can monitor what his/her general baseline value is and see how heavy exercise, stress, etc. factors influence it, and see when the value gets back to baseline, indicating for example capability to take another bout of heavy exercise. RMSSD can be measured in different length time windows and in different positions, e.g. supine, sitting or standing. In our system, RMSSD is naturally measured at night in a 3-minute window during deep sleep, when both heart and respiration rates are even and slow, and number of movement artifacts is minimized…” Here is an example of how recovery is visualised in the Emfit dashboard:

Emfit dashboard

I looked for a way to integrate this measure that would fit my “planet metaphor”. I’ve chosen a kind of pivot idea, vaguely reminiscent of the rings around planets.

Using the mouse pointer to enter different values of recovery

I thought it would be easy to just draw a line straight through the middle of the circles, tilting depending on the height of the score. It was harder than expected. I ended up using two mirroring lines and vectors. My starting point was Daniel Shiffman’s excellent book The Nature of Code.
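Stripped of the refinements, the idea looks roughly like this (the recovery range of -100 to 100 and the exact angle mapping are assumptions for the sketch):

void drawRecoveryLine(float cx, float cy, float len, float recovery) {
  // tilt angle from the score: positive recovery goes up towards the right
  float angle = map(recovery, -100, 100, QUARTER_PI, -QUARTER_PI);
  PVector v = PVector.fromAngle(angle);
  v.mult(len / 2);
  if (recovery >= 0) {
    stroke(0, 200, 0, map(recovery, 0, 100, 50, 255));    // more opaque = better recovery
  } else {
    stroke(200, 0, 0, map(-recovery, 0, 100, 50, 255));
  }
  strokeWeight(3);
  // two mirrored halves from the centre, like a pivot through the circles
  line(cx, cy, cx + v.x, cy + v.y);
  line(cx, cy, cx - v.x, cy - v.y);
}

void setup() {
  size(400, 400);
  background(255);
  drawRecoveryLine(width/2, height/2, 300, 60);   // a decent recovery
}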

Integrating with circle visualisations

Once I got the basics working, I went on to refine the way the line looks projected over the circles. Going up from the lower left corner indicates positive recovery, visualised by the green line; the more opaque, the better the recovery. Negative recovery, of course, goes the other way around.

Slight recovery

There is a difference in the starting points from which the recovery is calculated. Sometimes my evening HRV is very high, which results in a meagre or even negative recovery. I might think of an elegant way to incorporate this in the visual. Maybe I have to work with an average value. For the moment I’m still trying to avoid numbers.

Almost maximum recovery

Negative recovery

sleepGalaxy: kick off

Finally, I’ve started to work on a piece that’s been on my mind for almost two years, ever since I met the nice people from Emfit at the Quantified Self conference. They kindly gave me their sensor in return for an artwork I would make with it.


Emfit QS sleep sensor

You put the sensor in your bed, go to sleep, and it wirelessly sends all kinds of physiological data to their servers: movement, heart rate, breath rate. Together, all this data is used to calculate the different sleep stages. From the heart rate they’ve recently started calculating HRV and recovery. The latter value is, to me, the best indicator of my sleep quality and how energetic I feel.
Emfit offers a nice interface to explore the data and view trends.

In sleepGalaxy I want to explore the relationship between sleep quality and the following variables: exercise, social and work meetings, calorie and alcohol intake, screen time, and overall happiness and stress during the day. I’m under the impression that these have the most impact on my sleep, that is, on the sleep phases, the ability to stay asleep and recovery.

Google form

To track the variables I’ve created a Google form that I fill in every night before I go to sleep. I’ve set an alarm on my iPad so I don’t forget.

Excel sheet with some of the Emfit data


First circle visualisation

From all the Emfit data I’ll be using a subset. My first sketches focus on the sleep phases. I’ve spent a couple of hours programming the basic idea first: transforming the sleep phases into concentric circles, going from awake on the outside to light sleep, REM sleep and deep sleep in the centre.

The next step was to make sure the different phases are displayed correctly, representing the amount of time spent in each phase and the total time in bed. I’m programming in Processing and I’ve created a class called Night. After reading in the Emfit Excel data as a csv file, I loop through the rows and create a Night object for every night.
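A bare-bones version of that setup could look like the sketch below (the csv column names are illustrative, the real class holds far more data per night, and I’m ignoring for a moment the time-format conversion that comes up next):

class Night {
  float awake, light, rem, deep;   // hours per phase, as decimal values

  Night(float awake, float light, float rem, float deep) {
    this.awake = awake;
    this.light = light;
    this.rem = rem;
    this.deep = deep;
  }

  void display(float x, float y, float scale) {
    noStroke();
    float total = awake + light + rem + deep;   // total time in bed
    fill(60);
    ellipse(x, y, total * scale, total * scale);                                 // awake, outer ring
    fill(100);
    ellipse(x, y, (light + rem + deep) * scale, (light + rem + deep) * scale);   // light sleep
    fill(160);
    ellipse(x, y, (rem + deep) * scale, (rem + deep) * scale);                   // REM sleep
    fill(220);
    ellipse(x, y, deep * scale, deep * scale);                                   // deep sleep, centre
  }
}

ArrayList<Night> nights = new ArrayList<Night>();

void setup() {
  size(800, 400);
  background(255);
  Table table = loadTable("emfit.csv", "header");
  for (TableRow row : table.rows()) {
    nights.add(new Night(row.getFloat("awake"), row.getFloat("light"),
                         row.getFloat("rem"), row.getFloat("deep")));
  }
  float x = 100;
  for (Night n : nights) {
    n.display(x, height/2, 20);   // 20 pixels per hour
    x += 180;
  }
}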
Displaying the circles went fine, but the proportions between the circles just didn’t look right. I realised I had a conflict: I was working with minutes in a decimal context. I wrote a little function that converts the minutes into decimal values and adds them to the whole hours:
// convert a time string like "7.30" (7 h 30 min) into the decimal value 7.5
float min2dig(String time){
  String[] tmp = split(time, '.');
  float t = float(tmp[0]) + (float(tmp[1]) / 60);
  return t;
}

Now the basis of the visualisation is ready. The image below displays the sleep phases of the four nights in the Excel data above. I look forward to adding more data. To be continued…
The first four nights visualised

Virtual View: programming animation

I’m still working hard on my animation. It’s going a bit slower than anticipated (what else is new), but I’m confident that I’ll have a nice, representative animation finished for the experiment. As an inventory, these are the elements I want in the testing (and probably the final) landscape: a horizon with hills, sky, a body of water, a shoreline and trees on the hills. And the animation elements: clouds, individual birds and flocks of birds, a butterfly, a bee, blowing leaves and ripples on the water. The forces I’m working with now are wind and gravity, but I might include more, for example to make the water ripples move naturally.
So far I’ve built the look and feel of the landscape, tweaking it a little here and there as I go along. I’m very happy with the clouds. They consist of a lot of circles positioned using the Perlin noise algorithm, with big ones at the top and smaller ones a bit lower.
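The cloud recipe, reduced to its essentials (all the specific numbers are placeholders, not the values from my sketch):

void drawCloud(float cx, float cy, int n, float seed) {
  noStroke();
  fill(255, 200);
  for (int i = 0; i < n; i++) {
    // noise gives smoothly varying positions instead of random scatter
    float x = cx + map(noise(seed + i * 0.15), 0, 1, -80, 80);
    float y = cy + map(noise(seed + 100 + i * 0.15), 0, 1, -25, 25);
    float d = map(y, cy - 25, cy + 25, 40, 15);   // big circles at the top, smaller lower down
    ellipse(x, y, d, d);
  }
}

void setup() {
  size(600, 300);
  background(135, 190, 235);
  drawCloud(200, 120, 60, 0);
  drawCloud(430, 90, 80, 5);
}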

Some frames of clouds moving

I’ve brought down the number of visible hills, as I think too many lines make for a chaotic landscape, which gives a restless feeling. The gradients for the sky and the water surface are the same; that is simply more logical.
I’ve also included a shoreline to account for the appearance of the blossom leaves and butterflies.
I finally managed to give the blowing pink blossom leaves a natural look. It was quite a challenge to make them rotate and move in the joyful and fascinating way leaves do.
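A much-simplified version of the blossom idea, with noise driving each leaf’s drift, bob and rotation (the constants are placeholders):

float t = 0;

void setup() {
  size(600, 300);
  noStroke();
  fill(250, 180, 200);
}

void draw() {
  background(200, 230, 245);
  t += 0.01;
  for (int i = 0; i < 10; i++) {
    // each leaf gets its own noise offsets for drift, bob and tumble
    float x = (noise(i * 10 + t) * width + frameCount * 1.5) % width;   // wind drift
    float y = noise(i * 20 + t) * height;                               // slow vertical bob
    float angle = noise(i * 30 + t) * TWO_PI * 2;                       // tumbling rotation
    pushMatrix();
    translate(x, y);
    rotate(angle);
    ellipse(0, 0, 12, 6);   // a petal as a small flat ellipse
    popMatrix();
  }
}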

Some frames of blossom animation

The next step will be to continue with the water ripple animation and the birds. Finally, I will be working on the trees on the hills. All elements will be kept as simple as possible: the movement tells most of the story, not the resemblance.

At this moment I can start animation elements at will, which is nice for constructing a story. I can use this in the experiments with the prototype as well, to test the effect of certain animating elements. But eventually the animations should start depending on heart-rate variables. That’s what I’ll have to find out when experimenting with the prototype.

Virtual View: developing animation

The past month I’ve been working on my landscape animation. By chance I discovered a great book by Daniel Shiffman called The Nature of Code. The book explains how to convert natural forces into code. I’m working through it, picking the forces and algorithms that suit my needs. So far the noise function in Processing has proven very useful: it allows for more natural variation (as opposed to the random function). I use it for creating the landscape horizons and for some forms of animation.
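A minimal example of the difference this makes for a horizon line: stepping through noise() in small increments yields smooth hills, where random() would give jagged spikes.

void setup() {
  size(600, 300);
  background(255);
  stroke(0);
  noFill();
  float xoff = 0;
  beginShape();
  for (int x = 0; x < width; x++) {
    // walk through noise space in small steps for a smooth hill line
    float y = map(noise(xoff), 0, 1, height * 0.3, height * 0.7);
    vertex(x, y);
    xoff += 0.01;   // smaller increments give gentler hills
  }
  endShape();
}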


Test for creating hills with Perlin noise

In a previous post I described how I calculated the colours used in a woodblock print by Hokusai. Since then I have discovered the colorlib library, a super fast library for creating palettes and gradients from existing pictures. You can sort colours and manipulate the palette using various methods. This means I can change my colours dynamically depending on user input.

Colorlib palette from Hokusai picture. Sorted on the colour green.

Apart from working through the book and creating basic animations I’m working on the look and feel of the landscape.

As I explained earlier this is based on the work of Hokusai. To my delight I discovered that a colleague is one of the few Dutch experts on Japanese woodblock printing, having received training in Japan. On top of that Jacomijn den Engelsen is also an artist whom I’ve admired for years. I met with her yesterday in her studio to learn more about this fascinating technique.


Jacomijn demonstrating the Japanese woodblock printing technique.

The characteristic look of the pieces comes from the use of water based paint on wet rice paper. For every colour a separate woodblock is used. The typical black outlines are also printed from a separate block.

Screen print from animation. Colorlib gradient used for sky and water.

The prints have a very flat, 2D feel. That is what I like: it is a kind of primitive picture of a landscape. The view people will be seeing won’t be a 3D simulation of nature but an artistic representation, a work of art with healing properties.

I’m not a painter or draughtsman so I was very happy with the tips Jacomijn gave me on how to make the landscape more convincing while still keeping the ‘Japanese flatness’.

Virtual View: colour palette

I’ve written a little program to create a colour palette for my landscapes. At the moment I’m studying articles on animation. Again they led me to Japanese and Chinese drawing and block printing. I wasn’t planning to go there, but there is such a strong link between my views on nature and eastern religious and philosophical traditions that it is simply the most logical and pleasant route for me to take. I’ll dive deeper into this in my next post.

Below you see a scan of a Japanese block-printed landscape by Hokusai. I like the colour palette, and I was wondering if I could find an easy way to use just those colours in my animation. After some programming (it’s been a while…) I’ve managed to extract the unique colours from the picture and display them. There are over 114000 colours in this picture! I’ve reproduced the original picture on top. It is so nice to see how just plotting the colours already creates something that resembles an abstract landscape.

The extracted colours plotted below the original picture

This piece of code enables me to extract all the colours from any digital image and use that as a basis for my computer graphics.
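In essence the program does something like this (a simplified reconstruction; the file name is illustrative):

import java.util.HashSet;

void setup() {
  size(600, 400);
  background(255);
  PImage img = loadImage("hokusai.jpg");   // the scanned print
  img.loadPixels();
  // collect every distinct colour value in a set
  HashSet<Integer> unique = new HashSet<Integer>();
  for (int i = 0; i < img.pixels.length; i++) {
    unique.add(img.pixels[i]);
  }
  println(unique.size() + " unique colours");
  // plot each colour as a single pixel, row by row
  int x = 0, y = 0;
  for (int c : unique) {
    stroke(c);
    point(x, y);
    x++;
    if (x >= width) { x = 0; y++; }
  }
}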

library atmosphere

The past couple of weeks I’ve been working on an assignment for the municipal library. The task was to let people present their views on the library of the future. To that end we created an area with seats, a bar, a touch table and lights. By touching a picture on the screen, visitors could select a different atmosphere, which at the same time changed the colour of the lights. The choices of the visitors were logged to a file. This installation was presented during the Cultuurnacht (culture night) in the city of Breda, the Netherlands.

My task was to make the interactive application and drive the lights. I’d been wanting to experiment with interactive lighting so I can apply it in my Hermitage 3.0 project, so for me this was a great opportunity to learn about it. And learn I did.

My idea was to work with the Philips Hue: they have a great API and an active community. But due to budgetary restrictions I had to work with an alternative: Applamp, also known as Milight. The concept is the same: a wifi-connected bulb can change colour and brightness when you send commands to a local port opened by a small wifi box. Applamp also has a phone app to work with the lights and a very basic API.

I had wanted to start working on the application before Christmas, but this ideal scenario didn’t work out. The bulbs arrived mid January… The first task was to connect to the lights using the app. It appeared that my Android phone was too old for the app to work, so I had to borrow my neighbours’ iPad. The bulbs can be programmed into groups, but you have to follow the steps for communicating with the lights exactly, otherwise it won’t work.

Applamp with iPad app

Once the bulbs were programmed I thought it would be easy to set up a simple program to switch a bulb on and off. I’d found a nice Python API and some other examples in different languages, though none in Java or Processing. I used Processing because I wanted a nice interface with pictures, a full-screen presentation and logging of the actions to a file.

I tried and tried, but the UDP socket connection wasn’t working. So the biggest thing I learned had to do with networking. I received a lot of help from Ludwik Trammer (of the Python API) and Stephan from the Processing forum. The latter finally managed to retrieve my local IP address and the port of the Milight wifi box, which was all I needed. (You actually don’t need the precise address: broadcasting to .255 is good enough.) The light technician Jan showed me a little app called Fing that makes it super easy to see everything connected to your local network.

In Processing I wrote the interaction, making sure that no buttons could be pressed while the program was driving the bulbs. There should be at least 100 ms between the different commands you send to the bulbs, which made the program a bit sluggish. But if the commands are sent too quickly they don’t reach the bulbs and the colour doesn’t change. I had to fiddle around with it to get it stable. Still, the settings that worked in my home weren’t optimal for the library, and alas there was not enough time to experiment with it there. So it wasn’t perfect, but people got the idea.

This is a snippet of the program in Processing:

// import UDP library
import hypermedia.net.*;

UDP udp;  // the UDP object
int port = 8899; // port of the Milight wifi box
String ip = "xx.xx.xx.255"; // local broadcast address

int[] colourArray = {110, -43, -95, 250, 145}; // hue bytes for the atmospheres
int currentAtmosphere = -1;
boolean startState;

void setup(){
  udp = new UDP(this, port);
  startState = true;
  RGBWSetColorToWhiteGroup1();
}

void mouseClicked(){
  currentAtmosphere = 1;
  RGBWSetColor(byte(colourArray[currentAtmosphere]), false);
}
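
// myDelay isn't shown above; a minimal blocking version that waits
// the given number of milliseconds between commands:
void myDelay(int ms){
  int start = millis();
  while(millis() - start < ms){
    // busy wait on purpose
  }
}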

void RGBWGroup1AllOn(){
  udp.send(new byte[] {0x45, 0x0, 0x55}, ip, port);
}

void RGBWSetColorToWhiteGroup1(){
  RGBWGroup1AllOn(); // group on
  myDelay(100);
  udp.send(new byte[] {byte(197), 0, 85}, ip, port); // make white
  udp.send(new byte[] {78, 100, 85}, ip, port); // dim light
}

void RGBWSetColor(byte hue, boolean tryEnd){
  RGBWGroup1AllOn();
  myDelay(100);
  udp.send(new byte[] {0x40, hue, 0x55}, ip, port); // send hue
  myDelay(100);
  if(tryEnd){
    udp.send(new byte[] {78, 100, 85}, ip, port); // dim light
  }
  else{
    udp.send(new byte[] {78, 59, 85}, ip, port); // full brightness
  }
}

Another puzzling thing is the hue value that has to be sent. As all the codes sent to the bulbs have to fit in a byte, the hue must be a value between 0 and 255, while the hue scale of course runs from 0 to 360 degrees. I figured out how they are mapped, but only by trying all the values from 0 to 255.

I’m happy to say that the installation was a success. People thought it was fun to work with, and I got some nice insights into people’s ideas for the library of the future. The final presentation could have been more subtle, but that’s something for next time.

First sonification workshop

From 20 to 24 November last, I took part in the first sonification workshop at OKNO in Brussels. This workshop is part of the European ALOTOF project. I’ll be working with them for the next two years on building a laboratory in the open field and making audio-visualisations of environmental and physiological data. Some thoughts on the workshop and the subject:

- What is your idea about ’sonification’ or even ‘audiovisualisation’?
I would like to use sound/silence, light and, for example, air flow to influence my inner state. I’d like to measure environmental and physiological data, use them to drive actuators, and then measure again to see the results.

- What were you working on in the workshop?
I had to invest a lot of time in reading the values from my decibel meter through the serial port with Processing. As measuring noise is important for my plans, I had to tackle that first. Unfortunately it took a lot longer than expected.
As I’m quite new to the world of sound, I explored some basic stuff using the minim library for Processing (http://code.compartmental.net/tools/minim/). After trying some frequency modulation and synthesis, which sounded awful, I ended up using layers of sine waves.
I used years of mood data that I read into Processing, sonifying one row of data every second. I used three sine waves: 1. the current mood, 2. the average mood for that day, 3. the average mood for that year. All three had their frequencies mapped from the mood values to between 60 and 400 Hz: the better the mood, the higher the tone.
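In Minim terms the setup looked roughly like this (a reconstruction using Minim’s Oscil UGen; the 1 to 10 mood scale is an assumption, and sonifyRow would be called once per second with the next row of data):

import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Oscil current, dayAvg, yearAvg;

void setup() {
  size(200, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  // three layered sine waves, one per data stream
  current = new Oscil(200, 0.3, Waves.SINE);   // the current mood
  dayAvg  = new Oscil(200, 0.2, Waves.SINE);   // average mood of that day
  yearAvg = new Oscil(200, 0.1, Waves.SINE);   // average mood of that year
  current.patch(out);
  dayAvg.patch(out);
  yearAvg.patch(out);
}

// called once per row of mood data, e.g. every second
void sonifyRow(float mood, float dayMean, float yearMean) {
  current.setFrequency(map(mood, 1, 10, 60, 400));    // better mood = higher tone
  dayAvg.setFrequency(map(dayMean, 1, 10, 60, 400));
  yearAvg.setFrequency(map(yearMean, 1, 10, 60, 400));
}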

I also worked with real-time data from the decibel meter, again using just sine waves, now with low frequencies of up to around 100 Hz. I measured the decibel level and stored it to calculate the average over up to an hour; the other sine wave followed the current decibel level. The low frequencies didn’t disturb the silence and acted like an echo.

- What are your plans for the future workshops?
My next step will be to work with physiological data from a muscle tension sensor (http://floris.cc/shop/en/sensors/807-muscle-sensor-v3-kit-.html) and hopefully with my heart and breath rate shirt (http://www.hexoskin.com/en). I’m hoping to produce sounds that will reduce tension and lower heart and breath rate. I’m thinking of reproducing natural sounds like birdsong, the rustling of leaves, etc.

flickr problems

Downloading the big-format photos from Flickr turned out to be more trouble than I expected. Downloading the small-format pictures was a breeze, as I explained here. But on almost all the big files I got this picture:

I suppose I got kicked out. I only realised this when I wanted to integrate the pictures into the pdf, so that was a bit of a setback. I had to think of a way to download the photos and still be able to link them to the dataset. I’ve used two programs to download all my pictures from Flickr: Bulkr and PhotoSuck. Both include the Flickr photo id in their file names. I found and rewrote a script to list all the file names, loop through them and save the pictures under the id used in the dataset. I keep being pleasantly surprised by Java and Processing. In the end I had to download only one picture by hand.
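The heart of that script was something like this (a simplified reconstruction, assuming the photo id is the first underscore-separated part of each file name; folder names are illustrative):

import java.io.File;
import java.nio.file.Files;

void setup() {
  File dir = new File(sketchPath("downloads"));   // the Bulkr/PhotoSuck downloads
  File out = new File(sketchPath("renamed"));
  out.mkdirs();
  for (File f : dir.listFiles()) {
    String name = f.getName();
    if (!name.toLowerCase().endsWith(".jpg")) continue;
    String id = split(name, '_')[0];              // the Flickr photo id
    try {
      // copy the file under the id used in the dataset
      Files.copy(f.toPath(), new File(out, id + ".jpg").toPath());
    } catch (Exception e) {
      println("could not copy " + name);
    }
  }
}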

The next step is scaling the differently sized pictures to match the width of the pdf. I think I might also use the titles and tags of the pictures in a subtle way; I’m not quite sure yet.