working on numuseum

After a long time I’ve picked up the numuseum website again. It’s been nagging me for ages that it’s so outdated and no longer working properly. I’m keeping it simple but will be implementing some new things.

design

I want to create a now part (“nu” means now in Dutch) and a museum part. Now always shows the most recent data. I’ll start off with a picture of the sky with time and location data. I will overlay that with personal data like mood and heart rate. The museum part will show the history of the now part in some interactive way.

I’ve found a cute, free font Jaapokki Regular that I’ll be using for the website.

The menu at the bottom gives access to the archive of net-art pieces, an about page and a contact page.

I’ve already started coding the sky part. I use a very neat FTP app (AndFTP) to send the sky pictures to the server. A PHP script sorts the pictures (most recent first) and grabs the date-time and location data from the EXIF headers.

sleepGalaxy: final design

Displaying different activities with the right duration and start time

There were still a couple of variables to visualise once the basic design was ready. I had to work on integrating my pre-sleep activities. In the end I used three activity types: sport, social and screen (computer and television). For the first two I’d logged duration by recording start and finish times. For screen time I just logged total duration because it was often scattered.
I was looking for a way to display all aspects (type, start, finish and duration) in a way that fitted the nice, round shapes I’d been using so far. Then I realised the pre-sleep activities were all recorded from 18:00h onwards, so the main circle could act as a dial: I could divide the space from 18:00 till 23:59 over the full circle. I calculated the starting position of each activity as a degree on the dial and added the minutes the activity lasted. Using the arc shape with a substantial line thickness resulted in nice, bold strokes around my “night” circles. Each activity type has its own colour.

The final night design (rating still in green)

I was happy with the result, but the recovery line just looked plain ugly. I decided to use the same arc shape on the other side of the circle: the better the recovery, the thicker the green stroke; the worse the recovery, the thicker the red one.

Finally there was the subjective rating of the sleep. I think it is important to incorporate how the night felt to me. Emfit uses a star system from 1 to 5 stars. So I played around with stars, ellipses and other shapes but finally settled on simple golden dots. A five-star night has the fifth and biggest dot in the middle of the deep sleep circle, which seemed fitting.

UFO like rating design

When the individual nights were finished it was time for the overall poster design. I somehow had got it into my head that this would be easy, but it was quite hard to capture the look and feel I was aiming for. I wanted the poster to be simple so that the individual nights would stand out and make a nice “galaxy”. On the other hand I did want a legend and some explanation of what was on display.

Sketch of the poster design

My first idea was to go for a size of 70 x 100 cm; the nights would then be around 10 cm across. That was too small for all the details to be visible. My final poster will be 91 x 150 cm. The nights are big enough and they all have enough space on the sheet, while it is still possible to compare them. I found the nice, slim font Matchbook for the title, the legend and the text. I’ll be sending the PDF to the printer next week.

Sleep statistics

Let me start with some characteristics of my sleep pattern. My mean actual sleep is 7.19 hours, of which 20.4% is REM sleep, 60.1% light sleep and 15.7% deep sleep. According to the Emfit QS website my REM sleep is on the low end and my light sleep on the high end of what’s needed for complete recovery. I suppose that’s why I often don’t feel really fit when I get out of bed. On average I spend 7.89 hours in bed.

I’ve been looking at the correlations between the sleep and context variables, using data from 35 nights. I’ve also included some other variables that I’ve measured during the same period. I’ll discuss some of the significant correlations I’ve found.

Table of correlations between the sleep and context variables

There are some surprises here. Eating in the evening doesn’t seem to be the healthiest thing to do. It lowers my HRV and prevents deep sleep. I’ve stopped eating after dinner.

Deep sleep in minutes. The graph makes very clear that having zero calories leads to the most minutes of deep sleep.

The effect of sleep on blood pressure was also an eye-opener. When I sleep better, my blood pressure drops again.

My subjective sleep appreciation correlates positively and highly significantly with all sleep phases and with the time spent in bed as well as actually sleeping. It has no correlation with deep sleep though. I’ve heard people say that deep sleep is the main determinant of their perceived sleep quality; for me it seems to be just sleeping. To crank up my REM and light sleep I should allow myself to spend more hours in bed; there is a strong correlation.

None of the other variables affect my sleep. This could be because they don’t occur very often, or not every night. I’ve looked at overall stress and happiness; they don’t seem to be connected to any of the sleep parameters. Happiness is positively correlated with the minutes I work out. This is of course often demonstrated in research, but it was nice that it sneaked into this unrelated dataset.

Contrary to what I expected, the following variables have no significant bearing on my sleep phases: social activity, meditation and evening screen time. Meditation I usually do in the mornings, so I can imagine that the effect wears off. But screen time not affecting my sleep goes against what is often claimed. Maybe that’s because I watch boring stuff ;-)

sleepGalaxy: design & calories

Design

I’ve been working on the overall design step by step, alternating between coding and looking. I want to incorporate my calorie intake after 6 PM. I’m not recording the times I ate, and I suspect the calories influence my sleep as a whole, so the most logical position is a ring all around the “sleep circles”. There is a lot of variation in daily intake after 6 PM, ranging from zero to 900 calories so far. I wanted to plot every calorie, so the dots would have to change size depending on the amount. I also wanted to spread the calories evenly around the entire circle. How to go about that? Fortunately, I found this great tutorial. The code is deprecated and the feed doesn’t seem to work any more, but I managed to recycle the part that plots elements in a circle.

Plotting numbers instead of dots

The code uses translate and rotate, which (for me) are very hard concepts to grasp. So instead of using the dots in the design I used numbers to get insight into how the elements are placed on the screen.
By keeping the size of the calorie circle constant, you can already see relations between the sleep duration, the amount of calories eaten and recovery.

Evening with a lot of calories

Evening with fewer calories

In the design you can also see an eclipse. These are the stress and happiness values for the whole day; I poll them by picking a number between 1 and 7 in the form at the end of the day. The mood is the bright circle. The black stress circle covers it; by shifting that circle vertically depending on the amount of happiness felt during the day, I can create a crescent, which can turn into a smile or a frown. The opacity of the black circle indicates the amount of stress. I’m coding this at the moment.

sleepGalaxy: recovery

As I explained in my previous post I find the recovery measurement very useful. It seems a good representation of how rested I feel. It is calculated using RMSSD. The Emfit knowledge base explains it like this: “… For efficient recovery from training and stress, it is essential that parasympathetic nervous system is active, and our body gets sufficient rest and replenishment. With HRV RMSSD value one can monitor what his/her general baseline value is and see how heavy exercise, stress, etc. factors influence it, and see when the value gets back to baseline, indicating for example capability to take another bout of heavy exercise. RMSSD can be measured in different length time windows and in different positions, e.g. supine, sitting or standing. In our system, RMSSD is naturally measured at night in a 3-minute window during deep sleep, when both heart and respiration rates are even and slow, and number of movement artifacts is minimized…” Here is an example of how recovery is visualised in the Emfit dashboard:

Emfit dashboard

I looked for a way to integrate this measure in a way that fits my “planet metaphor”. I’ve chosen a kind of pivot idea. It vaguely reminds me of the rings around planets.

Using the mouse pointer to enter different values of recovery

I thought it would be easy to just draw a line straight through the middle of the circles. I wanted it to tilt depending on the height of the score. It was harder than expected. I ended up using two mirroring lines and vectors. My starting point was the excellent book by Daniel Shiffman, The Nature of Code.

Integrating with circle visualisations.

Once I got the basics working, I went on to refine the way the line should look projected over the circles. Going up from the lower left corner indicates positive recovery, visualised by the green coloured line. The more opaque the better the recovery. Of course, negative recovery goes the other way around.

Slight recovery

There is a difference in the starting points from which the recovery is calculated. Sometimes my evening HRV is very high, which results in a meagre or even negative recovery. I still have to think of an elegant way to incorporate this in the visual. Maybe I have to work with an average value. For the moment I’m still trying to avoid numbers.

Almost maximum recovery

Negative recovery

sleepGalaxy: kick off

Finally, I’ve started to work on a piece that’s been on my mind for almost two years, ever since I met the nice people from Emfit at the Quantified Self conference. They kindly gave me their sensor in return for an artwork I would make with it.

Emfit QS sleep sensor

You put the sensor in your bed, go to sleep, and it wirelessly sends all kinds of physiological data to their servers: movement, heart rate, breath rate. All this data together is used to calculate the different sleep stages. From the heart rate they’ve recently started calculating HRV and recovery. This latter value is to me the best indicator of my sleep quality and how energetic I feel.
Emfit offers a nice interface to explore the data and view trends.

In sleepGalaxy I want to explore the relationship between sleep quality and the following variables: exercise, social and work meetings, calorie and alcohol intake, screen time, and overall happiness and stress during the day. I’m under the impression that these have the most impact on my sleep, that is, on the sleep phases, the ability to stay asleep and recovery.

Google form

To track the variables I’ve created a Google form that I fill in every night before I go to sleep. I’ve set an alarm on my iPad so I don’t forget.

Excel sheet with some of the Emfit data

First circle visualisation

From all the Emfit data I’ll be using a subset. My first sketches focus on the sleep phases. I’ve spent a couple of hours programming the basic idea first: transforming the sleep phases into concentric circles, going from awake on the outside to light sleep, REM sleep and deep sleep in the centre.

The next step was to make sure the different phases are displayed correctly, representing the amount of time spent in each phase and the total time in bed. I’m programming in Processing and I’ve created a class called Night. After reading in the Emfit Excel data as a CSV file I loop through the rows and create a Night object for every night.
Displaying the circles went fine, but the proportions between the circles just didn’t look right. I realised I had a conflict working with minutes in a decimal context. I wrote a little function that converts the minutes into a fraction and adds it to the whole hours:
// Convert a time string like "7.45" (7 hours, 45 minutes)
// into decimal hours (7.75).
float min2dig(String time) {
  String[] tmp = split(time, '.');
  float t = float(tmp[0]) + (float(tmp[1]) / 60);
  return t;
}

Now the basis of the visualisation is ready. The image below displays the sleep phases of the four nights in the Excel data above. I look forward to adding more data. To be continued…
Sleep phases of the first four nights

Quantified Self Europe conference 2015

As always, I was very much looking forward to the conference. The program looked promising and I hoped to meet QS pals. And I was giving an Ignite talk and testing my Virtual View installation with updated software (see below). This is an account of the most striking things I heard and saw.

QS Europe 2015 (photo: Steven Kristoffer)

The how-to sessions were new. I suppose they’re great for subjects that are limited in scope, like the one on meditation tracking by Gary Wolf. The idea that just tracking the time and duration of your meditation sessions can give you insight into how your life is going was refreshing. I’ve got an idea to automatically log my sitting periods; this session has given it a new boost.

There were some sessions on HRV. I went to the one Marco Altini gave together with Paul LaFontaine. I got some useful information on the two modes of tracking: PPG (60 seconds in the morning) or situational tracking. Both have their specific uses. The Polar H7 belt is most reliable and comfortable for the latter, as you can wear it for long periods. It was nice to see how Paul did many experiments combining datasets of activities (e.g. phone logs) with HRV data. The session was a nice intro, but I would have liked more hands-on information. I did talk with Marco later during the office hour. If I just want to measure global, all-day changes in heart rate, a device like the Fitbit Charge HR would also do. Marco was wearing one and was satisfied with it. He’s the expert, so it’s on my wish list…

I really liked that the show & tell talks were programmed on their own. It gave a lot less choice anxiety. The one on speed-reading by Kyrill Potapoc was a real revelation. I’ve already installed the Spritzlet browser extension. As a dyslexic, any improvement in my reading speed is welcome.
I also enjoyed the way Awais Hussain approached existing datasets to gain insight into causal chains and decision points, all in aid of getting the best start for the future. I think it is a poetic approach.

I skipped one breakout to stroll around the tables during the office hour. This made me very happy. Emmanuel Pont has developed the Smarter Timer app. It lets you track your activities at room level using differences in the strength of Wi-Fi networks. It is a learning app, so you can teach it your activities in certain places. A desktop app will also track your software use. Exactly what I need! And a big improvement on the piece I did way back in 2008, “Self portrait @ home“. (I scanned QR codes every time I entered a room.)
I also had a nice chat with Frank Rousseau from Cozy, an open source platform that gives you control over your own data. It offers similar functionality to the Google suite (mail, calendar, file sharing, etc.). I’m trying it out at the moment. I hope that I’ll be using it on my own server one day.

Ellis Bartholomeus

Ellis Bartholomeus told a very refreshing story about her hand-drawn smileys. She treated the little drawings as data and discovered much about her moods. It was nice to watch the different stages of her process of getting to grips with which icons to use and how to interpret them.
Jakob Eg Larsen shed some interesting light on one of my favourite topics: food logging. I liked the simplicity of his approach of just photographing one meal a day, his dinner. It was funny how he struggled with the aesthetics of the food. It made me wonder: how much do the colours of your food tell you about its nutritional value?
One of the most amusing and at the same time most personal talks was by Ahnjili Zhuparris. She was looking for correlations between her menstruation cycles and other aspects of her life, like music and word choice. Not all clichés appear to be true. The female cycles caused some complaining among a few of the male attendees. Moderator Gary Wolf dealt with that in a compassionate but effective way. I was very impressed.

Jakob Eg Larsen

Reactions to the Virtual View installation

During the Office hour and at the end of the day people tried out the installation. I had 14 users in total. Of course I logged some general data ;-)
I logged the baseline heart rate and the lowest heart rate achieved during the experience after the baseline was set. The mean heart rate is calculated over each animated wave. A wave lasts 7.5 to 13.75 seconds depending on the frequency spectrum data. The mean baseline heart rate was 79.68 and the mean lowest heart rate was 68.01. The difference between these two means is significant. There was quite some variation between users: the maximum heart rate during the baseline was 96.08 and the minimum was 54.76, a big difference of 41.3. The lowest pulse during the experience varied between 80.07 max. and 50.45 min., a difference of 29.6.
For me it was good to see that even in relaxed circumstances using Virtual View results in a reduction of heart rate. Every user showed a reduction; the average reduction was 14%, with a maximum of 32%!

Still from the animation

I’m really happy to have received valuable feedback. These are some of the remarks that stood out. Overall, users really liked the installation and found it relaxing. A couple of people expected an end to the animation, but a view doesn’t have a beginning or an end. I should find a way to make it clearer that people can leave at any time.
Even though I’ve improved the feedback on heart rate, some people would still like a little more information about it, for example their baseline measurement at the start of the animation.
The use case of daycare for difficult children, or for people under stress like refugees, was suggested.
One of the users said it would be nice to have sheep on the hills. I really like that idea. They shouldn’t be too hard to draw and animate, and their moving speed could, for example, also give an indication of heart rate.
There were some requests for Virtual Reality devices, but I still don’t think this is a suitable technology for patients in healthcare institutions, the main target group.

Apart from the content, there’s always the social aspect that makes the QS conferences such great experiences. People just feel uplifted by the open and tolerant atmosphere and the sense of learning and sharing that we all feel. I can’t wait for the next conference to come to Europe.

Virtual View: building the installation

During the discussions with the hospitals it became clear that I couldn’t just put my stuff in a room and leave it there, especially as the space was open to the public all day. So I had the idea of building a piece of furniture that would act both as a chair and as a chest for the hardware. As it seemed rather complex to integrate everything in a foolproof manner, I contacted DIY wizard Aloys.

We discussed the basic requirements and decided on building a sketch first, which we could improve on in a future version. Essential was the integration of the PC, sound system, beamer and heart-rate sensor. It had to be stable and elegant at the same time. The chair also had to act as an on-off switch, detecting user presence. Of course, time and budget were limited. So Aloys first made a CAD drawing. He also made a cardboard sketch.
CAD drawing of the installation
I wanted the operation of the installation to be simple: a one-switch interface to turn the complete installation on and off. We managed that using a rod to prod the big switch of the PC; this acted as a primitive key for the staff. Aloys also provided a lock so I could open the chair and get to the hardware and electricity supply if needed. The mouse and keyboard were also locked in the chair, making it impossible to stop the program without the rod key.
Full view of the installation
When no one is using it, the installation shows a static image of the animation to attract attention, accompanied by soft wind sounds. The software saves a still every minute, so a different image appears after each use. Once a user sits down (detected by a hardware switch using an Arduino) she is prompted to attach the clip of the sensor to her earlobe. When the sensor is detected, the animation and soundscape start. The speakers are integrated in the chair and create a very spacious and lifelike sound, giving a strong sense of presence. Users can stay and enjoy the installation for as long as they like.
When they get up, the animation freezes and the sounds mute, except for the soft wind sounds.

Animation from the user’s perspective

Most people found the experience relaxing and enjoyable. Some software issues emerged that I’m solving now. The chair was not very comfortable, so that is something we will work on in the next version. It also wasn’t very clear to users how the heart-rate was visualised. I’m improving that, creating more links between the audiovisuals and the physiological data without distorting the landscape feeling.
I also want the next version to be more mobile. That way I can easily take it for a demonstration.

Virtual View: statistics for experiment 3

In experiment three I wanted to see if adding movement to visual content had a bigger lowering effect on heart-rate and subjective stress than just using a still. And I wanted to know if variables like heart-rate and skin conductance could be restored to or below the baseline following a stress stimulus. Sound accompanied the visuals and I used the same soundtrack for both conditions.
The animation consisted of a main landscape layout with different animated elements overlaying that scene. The landscape consisted of a blue sky with white clouds slowly moving over it, three hills with shrubs in different shades of green, and a blue water body with a cream-coloured shore. The animations were started mostly in sequence, so there were just one or two animated elements to be seen at a time, aside from the clouds and the waves on the water body, which were visible most of the time.

Animation still used in condition 2

Other animations are: big and small flocks of “birds”, consisting of 150 and 5 “birds” respectively, which move in random directions within the frame; blossom leaves flying from one side of the screen to the other (this animation also included a bee flying across the screen in a slow, searching way); and finally the butterflies, which flutter near the bottom centre of the screen and disappear after a random time span. The visuals are not realistic but simplified, based on the style of old Japanese woodblock prints.
The sounds are inspired by nature but underwent a lot of computer manipulation. The sound is carefully synced with the imagery and movements on the screen.
In both conditions I measured subjective tension (7-point Likert scale), heartbeats per minute, heart-coherence and skin conductivity. The experiment consisted of three stages: a baseline measurement (5 minutes), a cognitive stress task (around two minutes) and the audiovisual stimulus part (5 minutes). Subjective tension was measured before the baseline measurement, after the stress task and after the stimulus. For a full description of the lab setup and experiment see the previous post.

Sample
The sample consisted of a total of 33 participants, more women than men (75% versus 25%); this ratio was the same for both conditions. They were mainly recruited from the art centre where the experiment took place, plus a couple of students and some members of the general public. They were randomly assigned to the conditions. The maximum age was 71, the minimum was 20 (mean 41.1). One dataset was corrupt, so I ended up with 16 participants (mean age 39.6) in condition 1 (animated landscape) and 16 (mean age 42.7) in condition 2 (landscape still).

Correlations
I’ve used SPSS 20 to calculate the statistics. I was curious whether the heart-rate or heart-coherence would correlate with the subjective tension and/or the skin conductance. I could find very few significant correlations between the different variables. There are only significant connections between the different measurements of one variable. So the beats per minute (BPM) of the baseline measurement correlates with that of the cognitive stress task measurement and of the stimulus (landscape) measurement. The same is true for the galvanic skin response (GSR) and the heart-coherence (HC). The only interesting correlation I found was a negative correlation between the baseline HC and the self reported tension (SRT) of the baseline and the stimulus. This could indicate that, assuming heart-coherence is a measure of alert relaxation, perceived tension at the start and during the task is the opposite of this alert relaxation state. But the correlations are weak (-.496 and -.501), so not much can be concluded from that.

Condition comparison
Before comparing conditions (with or without motion) I had to check whether the stress stimulus had worked and whether there was an effect of the audiovisual stimulus in general. Below you see an overview of the variables self reported tension (SRT), beats per minute (BPM), heart-coherence (HC) and galvanic skin response (GSR). The values for these variables are the mean values for the duration of the different parts of the experiment: baseline (t1), cognitive stress task (t2) and stimulus (audiovisual material, both conditions) (t3). You can also see the expected direction of the variables. The significant values are printed in green.
Table of results
From the table you can tell that there is a significant difference between the baseline measurement and the cognitive stress task on the one hand, and between the stress task and the stimulus on the other. This is true for BPM, GSR and self reported tension. All values rose during the stress task and decreased during the stimulus presentation. As those measures are strong indicators of stress, this indicates that the stress task worked and that the tension showed significant variation during the experiment. Heart-coherence shows no significant changes.
For the heart-rate there was even a significant lowering of the mean compared to the baseline, indicating that the BPM was even lower than when participants entered the experiment.

Of course I wanted to test if there was a difference in the variables between conditions; that way I could see if animation was more effective than using only a static image. As you can see from the table there were no significant results for either of the conditions apart from the skin conductivity (GSR). The skin conductivity is a measure of arousal: the more aroused, the higher the value. I would expect the GSR to be low at the start, high during the stress task and again low during the stimulus presentation. The GSR values for the stimulus presentation were significantly lower than during the stress task, but they were still significantly higher than during the baseline measurement. This indicates that the GSR levels hadn’t gone back to the baseline, let alone dropped below the baseline state. This might be because it takes more time for the skin activity to go back to normal; the response is slower than for heart-rate measurements.
We can see a reduction in heart-rate for both conditions, with a bigger reduction for the animation condition, but neither of these changes is significant.
For the self reported tension we see a significant drop from the higher values during the stress task to the stimulus presentation. This means that people felt significantly less tense watching the landscape than during the stress task. The perceived tension in the animation condition was also lower than at the start of the experiment, though not significantly so. We don’t see this effect in the static condition: there the baseline was lower, the effect of the stress stimulus was stronger and the overall variation was bigger. So you can’t really draw any definitive conclusions from this data, other than that the landscapes reduced arousal in both conditions.

The overall lack of significance for many of the variables in either condition may be caused by the small sample, or it may indicate that there isn’t enough difference between the conditions for it to be significant. This might be caused by the way the stimuli were presented. For the sound we used a high quality active noise cancellation headphone; the impact of the sound was big. The screen image on the other hand was rather small (84.5 x 61.5 cm). The effect of the visuals might therefore be less strong in comparison with the high impact of the sounds.

I was of course also interested in the overall differences between the conditions, especially for the landscape stimulus. When comparing the different measurement moments for BPM we can see that at every moment the heart-rate in the static image condition is lower. So the participants in the first condition already started out with a much higher heart-rate. During the stress task the difference is even bigger, and during the landscape presentation the differences become smaller. I had expected that the heart-rate in the first condition would be lower, but the differences are so big to begin with that you can’t draw any conclusions from this.

So does animation have a more positive effect on heart-rate, heart-coherence, skin conductance and self reported tension? I’ve looked at the interaction between all these variables and animation, but for none of the variables is the effect significant. The major effects are on heart-rate. A bit to my surprise there are absolutely no effects on heart-coherence. In the first condition we even see a (non-significant) lowering of coherence during the animation. I’m therefore not going to use this value to drive my animation, as was my original intention.

Scene comparison
While analysing I got curious whether there are differences between the scenes of the animation and sound in conditions 1 and 2. The animation and accompanying sounds can be divided into 10 different scenes. During the construction of the video I tried to incorporate various animation elements; they become visible one after the other.
I looked at the effects on mean heart-rate because it showed the most results. I wrote a script to calculate the mean heart-rate for every scene and for both conditions. The results are shown in the graph below.
Mean heart-rate per scene for both conditions

The variations between the scenes were not significant for the sound-with-still condition, but they were at two points for the animated condition. You can view stills of the scenes below. There was a significant reduction in heart-rate of 4.8 between scenes 1 (mean 76.6) and 2 (mean 71.8), and a significant reduction of 5.1 between scenes 1 and 9 (mean 71.5). This could suggest that more is happening to the participants in the animation condition and that animation has more potential for influencing the heart-rate of users.

Stills from the 10 different scenes

Virtual View: experiment 3 setup

For the design of the third experiment I got advice from environmental psychologist Petra van der Schaaf. The main research question for this experiment is: does animation have added value for the restorative effect of natural stimuli?
So far I’ve tested the stimuli in sets containing 6 or 12 slides, where the sound didn’t have a direct relation to the images. In this experiment I want to take the stimulus a step further.
I’ve been working on a program that produces randomised, computer-generated landscapes consisting of hills with shrubs and water. On top of that, different animated elements are projected: clouds, flocks of birds, bees, butterflies, blossom leaves and waves on the water.
All the elements move at their own speed and behave in an appropriate manner. By pressing certain keys I can make the elements appear and disappear from the screen. That way I constructed a scenario, which I recorded on video. The stimulus isn’t responding to the heart-rate yet, because I first want to gain insight into the effects of animation; this way I’m sure the whole group gets the same input. Sound artist Julien Mier continued to work on the sounds and made a score to match the images and the direction of movement on the screen.

Design

Due to a lack of participants I had to reduce my conditions from 4 to 2, focussing on my own animation instead of also testing photo-realistic versions. I worked with two groups: one group viewed the full video with accompanying sound; the other group got the full soundtrack but viewed only a still from the animation. That way I can test for the possible added effects of the animation element.

Subjective tension

The variables to be tested (the dependent variables) are:
Subjective feeling of tenseness: participants score the statement “I feel tense.” on a 7-point Likert scale going from not at all to the most tense ever. Beats per minute, inter-beat interval (calculated from BPM), heart-coherence, heart-rate variability and galvanic skin response. To measure the latter I used a separate device, the Mindtuner, which Malcolm from Heartlive kindly lent me. Two electrodes are placed around two fingers. A drawback is that the data is output in a separate file, so I will have to do some data cleaning later to match the data with the events. But it will be nice to see how the skin conductivity behaves, as this is a good indicator of stress.

Timeline

The experiment starts with the measurement of subjective tenseness. This is followed by a 5-minute baseline measurement where people are asked to relax while looking at a black screen. After reading the instructions, participants engage in a cognitive stress task: they have to do subtractions within a limited time span, and the more correct answers they give, the shorter the time they get for the calculations. There are 27 calculations in the task. Depending on the speed of the participants this task takes around 2 minutes. They then fill in the subjective tenseness questionnaire again. Then they watch either the five-minute animation with sound or the still with sound. The experiment finishes after they have filled in the tension questionnaire once more.

The lab at BKKC

The lab is located in a separate room at the BKKC office. Participants are seated at a table 200 cm from a TV screen. The image shown is 84.5 x 61.5 cm. The sound was played through active noise cancellation headphones (Bose QuietComfort 25). We chose these headphones because the building is located close to a railway and a lot of office noise penetrates into the lab.

Many thanks go to BKKC for their support with the promotion and organisation of the experiment. Special thanks go to Hans and Laetitia. Without their help this experiment would have been impossible.