Working on the sitting sensor

The most important step Danielle made this week was formulating the customer journey. We distinguish four different kinds of users: the Plug-and-float, De-kleine-onderzoeker, the Lab manager and the QS-wizard. The Plug-and-float is the individual user who wants to improve the quality of her own meditation session. She focuses on looking at the data and improving the meditation through actuation. De-kleine-onderzoeker wants to do research on the environment. She wants to know which actuation has the most positive effect. She organizes experiments for herself or for a bigger group. The Lab manager maintains the suits for a bigger group. She is able to work on the sensors and the actuation by adding or removing sensors or actuators. The QS-wizard wants to make new applications by herself. For every kind of user, Danielle described how they will use the software. This customer journey is the starting point for the software.

On Monday Danielle went to ProtoSpace in Utrecht to meet the software engineer and the system architect to discuss the data server. We learned that the microcontroller has to be programmed in a more modular way to make it future-proof. This makes it more complex than we first thought.

Today we worked on the sitting sensor. The data we got from the old one were too unstable. The sitting sensor is the on/off button for the system: as soon as you are sitting, it starts logging your session. But that also means the whole session is interrupted as soon as the sitting sensor fails. We knew the surface of the sitting sensor had to be bigger, so that it is no problem if you move a little. But the one we had was too big, which made it pretty expensive and not comfortable.
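For the technically curious, the on/off idea can be sketched roughly like the snippet below. This is not our actual firmware: the threshold numbers are only placeholders loosely based on the readings in our notes further down, and the hysteresis is simply there so small shifts while meditating don’t end the session.

```python
# Illustrative sketch only; thresholds are placeholders based on the readings below
# (roughly 880 when the seat is empty, roughly 140 when someone sits down).
SIT_BELOW = 400     # a reading dropping below this means someone sat down
STAND_ABOVE = 700   # a reading rising above this means the seat is empty again

def update_sitting_state(reading, sitting):
    """Hysteresis: small wiggles while meditating won't toggle the session on and off."""
    if not sitting and reading < SIT_BELOW:
        return True      # start logging the session
    if sitting and reading > STAND_ABOVE:
        return False     # stop logging the session
    return sitting
```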

sitting sensor – conductive foil 15×7 cm

As you can see in our notes below, the conductive foil was 15×7 cm at first. Before sitting, the value was about 880 or 860; while sitting it was about 140. That is a big range, which means you can move a little while meditating without interrupting the session. We cut the conductive foil in half to test whether a smaller piece would still work. You can see that the range of values became much smaller and the sensor was too unstable again. This might also have been caused by a lack of conductivity. We cut off the tape, as you can see below.

sitting sensor – conductive foil 15×3.5 cm

notes sitting sensor – conductive foil

But we thought the conductive cloth we have might conduct better than the conductive foil, so that the small piece of 15×3.5 cm would be enough to get a bigger range. We just tried it, and as you can see our experiment was successful: with the conductive cloth of 15×3.5 cm we got the best values and the biggest range yet. For now this one is our choice. Next week we have to work out how to integrate it into the suit.

sitting sensor – conductive cloth 15×3.5 cm

notes sitting sensor – conductive cloth

I see how difficult it is to be the team leader. Danielle has the vision. She wants to reach her goal, but sees how ambitious it is. It seems very difficult to me to stay true to your own vision when there are still organizational problems you have to solve. We try to formulate a common vision so that every team member knows our plans. This vision has to be the base everyone is familiar with, so that every team member goes for it. But I am optimistic: together we will get there!

Virtual View: statistics for experiment 3

In experiment three I wanted to see if adding movement to visual content had a bigger lowering effect on heart-rate and subjective stress than just using a still. And I wanted to know if variables like heart-rate and skin conductance could be restored to or below the baseline following a stress stimulus. Sound accompanied the visuals and I used the same soundtrack for both conditions.
The animation consisted of a main landscape layout with different animated elements overlaying that scene. The landscape consisted of a blue sky with white clouds slowly moving over it, three hills with shrubs in different shades of green, and a blue water body with a cream-coloured shore. The animations were started mostly in sequence, so there were just one or two animated elements to be seen at a time, aside from the clouds and the waves on the water body, which were visible most of the time.

staticAnimationStimulus

Animation still used in condition 2

Other animations are: big and small flocks of “birds”, consisting of 150 and 5 “birds” respectively, which move in random directions within the frame; blossom leaves flying from one side of the screen to the other (this animation also included a bee crossing the screen in a slow, searching way); and finally the butterflies, which flutter near the bottom centre of the screen and disappear after a random time span. The visuals are not realistic but simplified, based on the style of old Japanese woodblock prints.
The sounds are inspired by nature but underwent a lot of computer manipulation. The sound is carefully synced with the imagery and movements on the screen.
In both conditions I measured subjective tension (7-point Likert scale), heartbeats per minute, heart-coherence and skin conductivity. The experiment consisted of three stages: a baseline measurement (5 minutes), a cognitive stress task (around two minutes) and the audiovisual stimulus part (5 minutes). Subjective tension was measured before the baseline measurement, after the stress task and after the stimulus. For a full description of the lab setup and the experiment, see the previous post.

Sample
The sample consisted of a total of 33 participants, more women than men (75% vs. 25%); this ratio was the same for both conditions. They were mainly recruited from the art centre where the experiment took place; there were a couple of students and some members of the general public. They were randomly assigned to each condition. The maximum age was 71, the minimum was 20 (mean 41.1). One dataset was corrupt, so I ended up with 16 participants (mean age 39.6) in condition 1 (animated landscape) and 16 (mean age 42.7) in condition 2 (landscape still).

Correlations
I’ve used SPSS 20 to calculate the statistics. I was curious if the heart-rate or heart-coherence would correlate with the subjective tension and/or the skin conductance. I could find very few significant correlations between the different variables. There are only significant connections between the different measurements of one variable. So the beats per minute (BPM) of the baseline measurement correlates with that of the cognitive stress task measurement and of the stimulus (landscape) measurement. The same is true for the galvanic skin response (GSR) and the heart-coherence (HC). The only interesting correlation I found was a negative correlation between the baseline HC and the self-reported tension (SRT) of the baseline and the stimulus. This could indicate that, assuming that heart-coherence is a measure of alert relaxation, perceived tension at the start and during the stimulus is the opposite of this alert-relaxation state. But the correlations are weak (-.496 and -.501), so not much can be concluded from that.
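For those who prefer code over SPSS menus, the same kind of pairwise correlation check could be sketched in Python as below. The file name and column names are made up for the example; the actual analysis was done in SPSS 20.

```python
import pandas as pd
from scipy import stats

# Hypothetical data file with one row per participant; the column names are made up.
df = pd.read_csv("experiment3_means.csv")

# Pearson correlation matrix over the logged variables
print(df[["SRT_base", "BPM_base", "BPM_stim", "HC_base", "GSR_base", "GSR_stim"]].corr())

# A single pair with a p-value: baseline heart-coherence vs. baseline reported tension
r, p = stats.pearsonr(df["HC_base"], df["SRT_base"])
print(f"r = {r:.3f}, p = {p:.3f}")
```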

Condition comparison
Before comparing conditions (with or without motion) I had to check if the stress stimulus had worked and if there was an effect for the audiovisual stimulus in general. Below you see an overview of the variables self reported tension (SRT), beats per minute (BPM), heart-coherence (HC) and galvanic skin response (GSR). The values for these variables are the mean values for the duration of the different parts of the experiment: Baseline (t1), cognitive stress task (t2) and stimulus (audiovisual material, both conditions) (t3). You can also see the expected direction of the variables. The significant values are printed in green.
results
From the table you can tell that there is a significant difference between the baseline measurement and the cognitive stress task on the one hand, and between the stress task and the stimulus on the other. This is true for BPM, GSR and self-reported tension. All values rose during the stress task and decreased during the stimulus presentation. As those measures are strong indicators of stress, this indicates that the stress task worked and the tension showed significant variation during the experiment. Heart-coherence shows no significant changes.
For the heart-rate there was even a significant lowering of the mean compared to the baseline, indicating that the BPM during the stimulus was lower than when participants entered the experiment.

Of course I wanted to test if there was a difference in the variables between conditions; that way I could see if animation was more effective than using only a static image. As you can see from the table there were no significant results for either of the conditions apart from the skin conductivity (GSR). The skin conductivity is a measure of arousal: the more aroused, the higher the value. I would expect the GSR to be low at the start, high during the stress task and again low during the stimulus presentation. The GSR values for the stimulus presentation were significantly lower than during the stress task, but they were still significantly higher than during the baseline measurement. This indicates that the GSR levels hadn’t gone back to the baseline, let alone dropped below the baseline state. This might be due to the fact that it takes more time for the skin activity to go back to normal; the response is slower than for heart-rate measurements.
We can see a reduction in heart-rate for both conditions with a bigger reduction in heart-rate for the animation condition. But neither of these changes are significant.
For the self-reported tension we see a significant lowering from the higher values during the stress task to the stimulus presentation. This means that people felt significantly less tense watching the landscape than during the stress task. The perceived tension was also lower in the animation condition than at the start of the experiment, though not significantly so. We don’t see this effect in the static condition. For this condition the baseline was lower and the effect of the stress stimulus was stronger. The overall variation was bigger. So you can’t really draw any definitive conclusions from this data other than that the landscapes reduced arousal in both conditions.

The overall lack of significance for many of the variables in either condition may be caused by the small sample, or it may indicate that there isn’t enough difference between the conditions for it to be significant. This might be caused by the way the stimuli were presented. For the sound we used high-quality active noise-cancelling headphones, so the impact of the sound was big. The screen image on the other hand was rather small (84.5 × 61.5 cm). The effect of the visuals might therefore be less strong in comparison with the high impact of the sounds.

I was of course also interested in the overall differences between the conditions, especially for the landscape stimulus. When comparing the different measurement moments for BPM we can see that at every moment the heart-rate in the static image condition is lower. So the participants in the first condition already started out with a much higher heart-rate. During the stress task the difference is even bigger, and during the landscape presentation the differences have become smaller. I had expected that the heart-rate in the first condition would be lower, but the differences are so big to begin with that you can’t draw any conclusions from them.

So does animation have a more positive effect on heart-rate, heart-coherence, skin conductance and self-reported tension? I’ve looked at the interaction between all these variables and animation, but the effect is not significant for any of the variables. The major effects are on heart-rate. A bit to my surprise there are absolutely no effects on heart-coherence. In the first condition we even see a (non-significant) lowering of coherence during the animation. I’m therefore not going to use this value to drive my animation, as was my original intention.

Scene comparison
While analysing I got curious to see if there are differences between the scenes of the animation and sound in condition 1 and 2. The animation and accompanying sounds can be divided into 10 different scenes. During the construction of the video I tried to incorporate various animation elements. They become visible one after the other.
I looked at the effects on mean heart-rate because it showed the most results. I wrote a script to calculate the mean heart-rate for every scene and for both conditions. The results are shown in the graph below.
scenesCompare
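The script itself isn’t included in this post, but the idea can be sketched like this. The scene start times and the sample format in the sketch are assumptions for the example, not the real timings from the video.

```python
import numpy as np

# Hypothetical scene start times in seconds; the real boundaries follow the video edit.
SCENE_STARTS = [0, 30, 60, 90, 120, 150, 180, 210, 240, 270, 300]

def mean_bpm_per_scene(sample_times_s, bpm_values, scene_starts=SCENE_STARTS):
    """Average the logged BPM samples that fall inside each scene."""
    t = np.asarray(sample_times_s, dtype=float)
    bpm = np.asarray(bpm_values, dtype=float)
    means = []
    for start, end in zip(scene_starts[:-1], scene_starts[1:]):
        in_scene = (t >= start) & (t < end)
        means.append(bpm[in_scene].mean() if in_scene.any() else float("nan"))
    return means
```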

The variations between the scenes were not significant for the sound-with-still condition, but they were at two points for the animated condition. You can view stills of the scenes below. There was a significant reduction in heart-rate of 4.8 between scenes 1 (mean 76.6) and 2 (mean 71.8), and a significant reduction of 5.1 between scenes 1 and 9 (mean 71.5). This could suggest that more is happening to the participants in the animation condition and that animation has more potential for influencing the heart-rate of users.

allscenes
Stills from the 10 different scenes

Virtual View: results experiment 2

The analysis of the second experiment has taken a long time. At first there appeared to be no significant results on any of the variables, except for the heart-rate during the cognitive stress task. So I consulted different people with a degree and research experience and asked for help. I’ve really learned a lot from them. They all have different approaches and ways of working, so I’ve picked out all the good tips and insights. My thanks go to Sarah, Malcolm and Marie. The latter is a researcher at Tilburg University; her knowledge of statistics dazzled me. She is the one who recommended a different analysis, which yielded more significant results.

Research questions

My main questions for this experiment were: Which type of stimulus results in the most stress reduction and relaxation? And which stimulus produces the highest heart-coherence? I want the values of that last variable to drive my landscape animation.
To test these questions I used the following dependent variables: BPM, heart-coherence, self-reported stress and self-reported relaxation; later I added heart-rate variability, calculated from the inter-beat interval. These were measured during the baseline measurement, the cognitive stress tasks and the stimulus sets.
The independent variables are: stimulus set 1 with 12 landscape photographs and synthetic nature sounds; stimulus set 2 with more abstract landscapes styled by me, with the same synthetic nature sounds in the background; and stimulus set 3 with 12 photographs of kitchen utensils and a soundtrack of someone preparing a salad. The expected direction of the variables will be explained below.

Analyses

After struggling for some time with the non-significance of the variables in the different sets I discovered that the randomisation of the sets hadn’t been ideal. There were 33 participants who viewed the sets in 6 different orders. On top of that the group size per order was different: some groups had only 4 participants, others 10. This is something I’ll have to take into account in my next experiment.
I used a repeated-measures analysis. In my first, non-significant analysis I had used my baseline measurement as a covariate; Marie said that wasn’t the way to go. So I used plain repeated measures where the baseline is simply the first measurement, with no covariates. And I did a post-hoc analysis (Bonferroni) to see the differences between the set results.
This is an overview of the results:
Results overview
Sarah made this clear lay-out of the research results compared to the expected results.
As you can see from the blue results, the subjective stress measurements are significant compared to the baseline for all three stress tasks. For the first stress task (note this task isn’t connected to set 1; it is just the first task after the baseline measurement) the difference in heart-rate is significant during the stress task. There is also a significant difference in HR during the landscape set. The heart-rate for the kitchen utensil set is also significant. Even though the heart-coherence changes are in the right direction, none of them are significant. There are also no significant differences between the subjective relaxation questionnaires.

On Sarah’s recommendation I also looked at the correlations between all the variables. That is very interesting as it reveals relationships between the variables. As the subjective relaxation questionnaire didn’t show any significant results, I was curious to see how it correlates with the stress questionnaire. There should be a significant negative correlation between the two. And there is; it is especially strong for the baseline stress measurement and all relaxation measurements. On the other hand there was no correlation between subjective relaxation and heart-rate, even though a lowering of heart-rate may be considered an indication of relaxation. All in all the relaxation questionnaire doesn’t give convincing results. There was a very strong correlation between heart-rate and heart-rate variability. In fact too strong: as Sarah pointed out, they measure the same thing, so there is no use including this variable in the results.

First set

As the stress stimulus was strongest the first time (see below), Marie advised me to do an analysis on the first set that was shown after the first stress task, independent of what kind of stimulus set it was. This was the distribution: set 1: shown 11 times; set 2: 14; set 3: 8. The results from this analysis completely matched the other results. Heart-rate and heart-rate variability are significant (this is of course an average of all three sets shown); heart-coherence and self-reported relaxation were not. There was no interaction effect between the set shown and either the heart-rate or heart-rate variability, which suggests that the order has no effect on the results.

Graphs

Stimulus overview

I made some manual graphs to see the effects of the stimuli on heart-rate and heart coherence side by side. There is no significant difference between the pictures in the three sets. For me it is still nice to see the difference between the pictures. The graph is done manually in Photoshop.

Stimuli used

HRV

When I asked Malcolm for advice about the results, he suggested I calculate heart-rate variability from the inter-beat interval values that I’d logged. Heart-rate variability is known to correlate well with stress (negatively) and relaxation (positively). So that’s valuable information to add to the results.
I wrote a script in Processing to calculate and visualise the HRV for the whole experiment, divided into 5-second windows. The white line is the baseline measurement, the red lines are the stress inductions and the green lines are the audiovisual stimuli. You can tell from the image that the stress induction has some effect.

HRV results
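My script was written in Processing; purely as an illustration, a windowed HRV calculation could look like the Python sketch below. RMSSD is used as the HRV measure here, which is just one common choice and not necessarily the exact metric from the original script.

```python
import numpy as np

def hrv_per_window(ibi_ms, window_s=5.0):
    """Split a series of inter-beat intervals (in ms) into fixed time windows and
    return one HRV value per window (RMSSD here; other HRV metrics would also work)."""
    ibi = np.asarray(ibi_ms, dtype=float)
    beat_times = np.cumsum(ibi) / 1000.0        # time of each beat in seconds
    hrv = []
    start = 0.0
    while start < beat_times[-1]:
        in_window = (beat_times >= start) & (beat_times < start + window_s)
        window = ibi[in_window]
        if len(window) > 1:
            diffs = np.diff(window)
            hrv.append(float(np.sqrt(np.mean(diffs ** 2))))   # RMSSD for this window
        else:
            hrv.append(float("nan"))                           # too few beats in window
        start += window_s
    return hrv
```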
Looking at my correlation table however there is only a significant negative correlation between the baseline subjective stress measurement and the HRV. Neither the other stress measurements nor the subjective relaxation measurements show any correlation.
It is hard to tell from the image but the photo realistic landscape set has a significant difference from the baseline measurement. The third set is almost significant (p = .054).

Conclusions

The first conclusion should be that the differences between the stimuli in the sets are small. There are significant* differences in average heart-rate between the sets (68.26 (baseline); 66.46* (set 1); 66.75 (set 2); 66.32* (set 3)). But the differences are really small. There is a reduction in all the sets. Set 3, the kitchen utensils, has the lowest average. The set isn’t very stimulating, which might explain the low heart-rate. This conclusion is also backed by the fact that the results from using only the first set shown are comparable to those from working with the individual sets.
Heart coherence, which I want to use for driving the animation and triggering the interaction with the installation, showed that the styled landscapes with sounds had the highest heart-coherence average, but the results were not significant. It does not seem a good measure for pure relaxation. Heart coherence is a difficult term, but this description gives a good indication of the different aspects of this state: “In summary, psychophysiological coherence is a distinctive mode of function driven by sustained, modulated positive emotions. At the psychological level, the term ‘coherence’ is used to denote the high degree of order, harmony, and stability in mental and emotional processes that is experienced during this mode.” (From The Coherent Heart, p. 12, McCraty, Rollin, et al., Institute of HeartMath.) On page 17 of that document it states that: “In healthy individuals as heart rate increases, HRV decreases, and vice versa.” As HRV and coherence are closely linked, the same is true for heart coherence. Even though heart coherence is much broader than relaxation, it also encompasses activation of the parasympathetic nervous system, which is a marker for relaxation. Important in heart coherence is the inclusion of positive emotions. This is what I try to evoke by using landscapes based on generally preferred landscapes.
The Virtual View installation should provide a relaxing distraction for people in care environments. Cognitive states that relate to this goal are soft fascination and a sense of being away as introduced in the Attention restoration theory (ART) by Kaplan and Kaplan. I’m guessing now that heart coherence might correlate with those cognitive states. This is something I will explore in the next experiment.

The stress task was perceived as stressful, judging from the subjective reports. These findings are partly backed by the physiological data: only the heart-rate of the first stress task differs significantly from the baseline. Our goal in introducing a stress task was to create bigger differences in heart-rate. For that to be successful the stress task should really produce stress. Although people reported feeling stressed, we couldn’t measure a physiological effect three times in a row. So for the next experiment I’ll work with 3 groups who will all get only one stress stimulus and one landscape stimulus.

All in all this experiment doesn’t prove that my styled landscapes with synthetic nature sounds create the most relaxation and heart-coherence, but neither do the results prove that they don’t. So for the next experiment I’ll continue with the styled landscapes and introduce animation.

Virtual View: conducting experiment two

Our ideal for the execution of the second experiment was to have 60 participants of 40 years and older. There would be two labs where the experiment would be held in alternating rooms over 3 days. The rooms would be in a quiet part of the school, as we had quite a lot of disturbance during the first experiment.

The first setback was the location. It wasn’t possible to have two classrooms for three days at the same time, and there weren’t any rooms available in a quiet part of the school. Eventually there was no other choice than to use a room in the middle of the busy documentation centre and spread the experiments out over 5 days. The room was a kind of aquarium: it was very light and you could see people walking around through the glass walls. During the tests there was disturbance from talking and from students opening the lab door by mistake. So far from ideal.

But my main disappointment was with the sample. Only one day before the start of the experiment the students notified me that they had managed to get only 20 participants instead of the 60 we had agreed upon. We were mostly depending on the teachers for participation, but it was the period of the preliminaries and they were very busy. Also, the trial would now take 40 minutes instead of the 20 to 30 minutes the first experiment took. Had I known earlier I could have taken steps and come up with a suitable solution.
As it was, I had to improvise: I let go of the control group and broadened the age range. Six students under 30 years old ended up taking part, and I asked around in my own network and managed to recruit 10 people in the right age group. In the end we tested 40 people, all of whom were exposed to the stress stimulus.

Unfortunately not all the results were valid and useful. Some data was lost due to technical problems. Also quite a number of people made mistakes filling in the questionnaires. We now had two questionnaires, one for self-reported stress and one for self-reported relaxation. The stress questionnaire contained one question in the positive direction (I feel everything is under control) and two negative items (I feel irritated, I feel tense and nervous). Both had to be reported on a 10-point scale.

stressQuestionnaire

Apparently this was confusing for some people, and even though notes were taken it wasn’t always possible to reconstruct the correct answer. In the next experiment we will also put some text below the numbers to indicate the value.
There were also two very extreme results (outliers); they couldn’t be included in the data set as they would mess up the averages too much. So I ended up with 33 data sets I could use for my analysis.

But first the data had to be sorted and structured. It took me quite some time to streamline the copious EventIDE output into a useful SPSS dataset.

The baseline measurement included self-reported stress (pink), heart-rate (orange), heart-coherence (red) and self-reported relaxation (green).

baselineOutput
The three answers from all the questionnaires had to be combined into one value and checked for internal validity in SPSS.

It’s nice to take a look at a part of the results from the cognitive stress task:
cognitiveTask
From the output you can see exactly what the sums were, how much time it took to solve them, what the answer was and whether the given answer was correct or not. I didn’t use this data, but it would be nice to see if, for example, participants with more errors have higher heart-rates. Heart-rate (orange) and heart-coherence (red) are again below the results.

Before each stimulus set there was the stress questionnaire, and after each set the relaxation questionnaire. The output for each set, which consisted of 12 pictures with sound, is laid out as follows:
Picture count | set number | image id | image name | inter beat interval | BPM | heart-coherence
setOutput
Each picture was shown for 20 seconds and the heart data was logged around four times per second. The output for one picture looks like this: 60.6|60.5|60.4|60.9|61.2|61.5|61.7|61.8|61.9|61.9|61.9|61.9|61.9|61.9|61.8|62.6|63.1|63.5|63.7|63.5|63.3|63.2|63.2|63.1|63.7|63.8|63.9|63.4|63.1|62.9|62.7|63.1|63.5|63.6|63.6|63.7|63.7|63.8|63.8|63.8|63.4|63.2|62.9|62.8|62.9|62.9|62.9|62.9|62.6|62.2|62.1|61.8|61.5|61.3|61.2|61.1|61.1|61.0|60.9|60.8|61.1|61.3|61.4|61.5|61.6|61.6|61.7|61.7|61.7|61.5|61.3|61.3|61.2|61.2|61.0|60.9|60.8|61.0|61.1|61.2|61.2|61.2|61.3|61.3|61.4|61.4|61.0|
This yields an average of 62.1, which is the output I used. But it is good to have all this data for each individual image. All the image averages had to be combined into a set average so I could easily analyse the differences between all three sets. I’m still analysing the data. More on that in my next post.
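As a small technical footnote: reducing one picture’s pipe-delimited log to its mean comes down to something like the sketch below (illustrative only; the function name is made up).

```python
def picture_mean(raw_line):
    """Parse one picture's pipe-delimited BPM log and return the mean value."""
    values = [float(v) for v in raw_line.split("|") if v.strip()]
    return sum(values) / len(values)

# Example with the first few samples of the log shown above (truncated for brevity);
# the full line averages to roughly 62.1.
print(picture_mean("60.6|60.5|60.4|60.9|61.2|61.5|"))
```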

Virtual View: design of experiment two

After conducting and analysing the first experiment some points of improvement emerged.

  • The differences in heart-rate between the sets weren’t significant so we want to create more extremes in heart-rate.
  • One group will get a heart-rate enhancing trigger and there will be a control group that won’t.
  • There was evidence of interaction with age for some of the variables so we want a more homogeneous age group to work with.
  • The experiment should be simplified: fewer sets, and the same sounds for the landscape sets. The duration of each stimulus was rather short, so we want to try to double the number of pictures in each set.
  • The control set should be neutral instead of negative.

It was clear that we wanted to introduce stress into the experiment. The target group is patients who visit hospitals; they are under stress a lot of the time. So we had to create a stress stimulus. Together with the students and a teacher from Avans Hogeschool we looked into some of the known possibilities for inducing stress. Our idea was to simulate a hospital through minor medical treatments. But we realized this would probably not work with our sample: they would be teachers with a background in nursing, so taking blood pressure wouldn’t upset them. I also discussed some options with Malcolm and Sarah. We considered showing parts of horror movies too subjective. The best option is physical stress in the form of electric shock or ice water, but this is out of our league. We don’t have the knowledge or experience to conduct an experiment like that.

Finally I settled on a cognitive stress task in the form of calculations. As the stress task had to be repeated, we needed a stimulus that would remain a challenge and induce some stress. Cognitive tasks have that ability. To keep it challenging there should be different levels, so it also stays interesting for people who are good at doing calculations. I made a little design.

cognitive Task Design

I had no idea how this design could be implemented in EventIDE. So I sent my sketch to Ilia, who programmed a nice interface. The subtractions get more difficult if your answers are correct, and on top of that the allotted time decreases after three correct answers in a row.
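The real task was built by Ilia in EventIDE; the sketch below only illustrates the adaptive logic described above. The difficulty steps, starting values and time limits are assumptions, not the actual parameters.

```python
import random
import time

def run_subtraction_task(duration_s=120):
    """Adaptive subtraction task: correct answers make the sums harder, and three correct
    answers in a row shorten the allotted time. All step sizes here are placeholders."""
    level, streak, time_limit = 1, 0, 10.0
    end_time = time.time() + duration_s
    while time.time() < end_time:
        a = random.randint(20 * level, 100 * level)
        b = random.randint(1, 9 + 5 * level)
        asked_at = time.time()
        answer = input(f"{a} - {b} = ?  ({time_limit:.0f} s)  ")
        answered_in_time = (time.time() - asked_at) <= time_limit
        if answered_in_time and answer.strip() == str(a - b):
            level += 1                                 # harder sums after a correct answer
            streak += 1
            if streak == 3:                            # three in a row: less time per sum
                time_limit = max(3.0, time_limit - 1.0)
                streak = 0
        else:
            streak = 0                                 # a wrong or late answer resets the streak

if __name__ == "__main__":
    run_subtraction_task()
```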

cognitive Task
The design is a 2×3 factorial with repeated measures. There will be three landscapes/objects with sounds and each of them will be experienced either with or without a stress stimulus preceding them.

Factor design

The flow of the experiment is as follows:
Design Experiment 2
Depending on how long it would take for the participants to complete the questionnaires the duration of the entire experiment will be around 30 minutes.

For me it was kind of hard to include the different questionnaires. We wanted to check experienced relaxation as we did in the first experiment. But we also wanted to know how much stress participants had experienced during the stress task. As these are opposite experiences I found it hard to find a place for both in the flow of the experiment. I finally settled for checking for self-reported stress right after the cognitive task and reporting relaxation after the landscape stimuli.

I would have loved to measure physiological stress data. Apart from heart-rate and heart-coherence there was no objective data. I discussed it with Malcolm. He kindly offered to lend me some of his equipment, but we realized we just didn’t have enough time to implement it properly. So for now I just have to make do with the heart-rate and self-reporting.

The dependent variables in this experiment are:
Heart-rate (beats per minute & inter beat interval)
Heart-coherence
Self-reported stress
Self-reported relaxation
The independent variables are:
Photo realistic landscapes & synthetic nature sounds
Styled landscapes & synthetic nature sounds
Kitchen utensils & kitchen sounds
Age and gender

The sample will consist of 30 + 30 participants older than 40 years, without heart problems or heart medication.

Finally I could work on my own creations. This was the time for me to test some of my first sketches of the Virtual View landscapes. They are a combination of computer graphics made in Photoshop and computer-generated images made in Processing, which I combined into bitmaps. As we wanted participants to be exposed to the stimuli for longer, we doubled the number of pictures in each set from 6 to 12. The inspiration for the landscapes came from our literature study and the results of the first experiment. As an artist I wanted to see what I could leave out and still have a relaxing effect. I also experimented with different techniques to create the image elements.
styled landscape
The photo-realistic images were chosen to resemble the styled images and have the same simple layout. The idea was to see if there would be a difference in relaxation and stress-reduction effect between the computer graphics and the photographs.

Photo realistic landscape

Our initial idea for the neutral images was again to use interiors. We thought of general school areas. But as we were approaching the end of the year the teachers would be pretty highly strung, and seeing pictures of the school might not be neutral for some. So we decided to use kitchen utensils. For the sound we used a recording of someone preparing a salad.
The sounds to accompany the landscapes were produced and composed by Julien Mier.  For us this was also the first sketch of what Virtual View could sound like. Julien made some nice synthetic birds and bees. We worked towards a piece that was a mix of background noise, silence and unexpected animal noises. The sound was timed to the transitions between the images in the experiment. So every 20 seconds a new piece of sound was started with different accents. We used the same soundtrack for both landscape sets.

Virtual View: results experiment one

In this post I want to give an overview of the results of the first experiment, and I will spare you the heavy statistics speak. So don’t expect a scientific article. The data is there and I may write a proper article one day, but it isn’t appropriate for this blog.

Together with Hein from the Open University I looked at the data from the first experiment. This is an exploratory experiment so we’re looking for trends and directions to take with us to the next step.
The students did a splendid job organizing the dataset. For each participant there was basic demographic data (gender and age), plus means and combined means for the perceived relaxation questions, for the separate images and for the images combined in sets. For each set there are means for beats per minute (BPM), the inter-beat interval (IBI) and heart-coherence.
To check our self-constructed questionnaire I did a scale reliability test. All the sets had good reliability for all 5 questionnaires. This just means that there is internal consistency between the questions. The questionnaire itself isn’t validated for measuring relaxation; we just asked the three questions.
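The reliability test was done in SPSS; the same check (Cronbach’s alpha) can be sketched in a few lines of Python, assuming the answers are arranged as one row per participant and one column per question. The example numbers below are made up.

```python
import numpy as np

def cronbach_alpha(answers):
    """Cronbach's alpha for a questionnaire: rows are participants, columns are items
    (here the three relaxation questions). Values above roughly 0.7 are usually read
    as acceptable internal consistency."""
    answers = np.asarray(answers, dtype=float)
    n_items = answers.shape[1]
    item_variances = answers.var(axis=0, ddof=1)
    total_variance = answers.sum(axis=1).var(ddof=1)
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Made-up answers of four participants to the three questions (scale 1 to 10)
print(cronbach_alpha([[7, 8, 7], [5, 5, 6], [9, 8, 9], [4, 5, 4]]))
```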

We did 4 analyses on the four variables: perceived relaxation (measured with the questionnaires), BPM, IBI and heart-coherence.
The stimulus sets were {sound}:
1. Preferred landscape with water element {running water @ 48 dB}
2. Preferred landscape in autumn {repetitive bird calls @ 47 dB}
3. Preferred landscape as abstract painting {melodious birdsong @ 56 dB}
4. Neutral hospital interiors {neutral hospital sounds @ 48 dB}
5. Landscape with deflecting views {running water and melodious birdsong @ 43 dB}

Self-reported relaxation

self-reported relaxation

self-reported relaxation, sets 1 to 5. Green is females, blue is males

The three questions we asked after the baseline measurement and after every stimulus set were: I feel at ease, I feel relaxed, I feel joyful and happy. Reported on a scale of 1 to 10, the three questions were merged into one relaxation scale. The hypothesis was that the overall relaxation scale would be lower for the hospital interior set (4) than for all of the landscape sets.
There was a significant effect for relaxation. As you can see from the graph, set number four (hospital interiors) shows a distinct decrease in the sense of relaxation. Although the abstract paintings also score lower, the trend is mainly caused by the dip in relaxation scores for the hospital set, which confirms our hypothesis.

There was also something going on with the interaction between age and relaxation. To gain more insight into what’s happening with the age effect I looked at the data and noticed there are two clear groups: 25 years old and younger and above 39 years. The groups are about the same size (young 15, older 18). There were no participants of the age between 25 and 39 years. To test for the significance of the relaxation for the two groups I ran a test that showed that for the young participants the relaxation effect isn’t significant but for the older participants it is.

relaxation divided by age group

relaxation divided by age group. Blue is older.

Heart-rate
For the heart-rate we used two measures based on the same data: beats per minute (BPM) and inter-beat interval (IBI). So it doesn’t make a difference which of the two analyses I discuss here. The hypothesis was that the BPM would be higher for the hospital interior set (4) than for all of the landscape sets.
There were no significant differences between the sets. Our hypothesis has to be rejected.

heart-rate for men and women

heart-rate for men (blue) and women (green)

But there is again something going on with age, this time in relation to heart-rate. Looking at the graph below it is clear that the heart-rate in reaction to the landscapes and sounds is at odds for set two and set four. The older and younger people react quite differently.

Beats per minute for two age groups

Beats per minute for two age groups. Younger is blue.

Heart coherence
The hypothesis for heart-coherence was that the coherence level would be lower for the hospital interior set (4) than for all of the landscape sets.

Heart-coherence for men and women

Heart-coherence for men (blue) and women

There is a significant trend for the age–coherence interaction. Looking at the graph we can see that the coherence for the women is almost the same over the 5 sets but higher than the baseline coherence measurement. The men show a much more varied response that is on average a lot lower than the baseline measurement. It is interesting to note that the abstract painting set, number 3, has a very high score for the men.
Looking a bit deeper into this trend there is again a relation to age. For the younger participants there was no significant difference between the sexes where heart-coherence is concerned. The graph of the older participants shows a significant difference between men and women. The older men cause the interaction-effect between gender and heart-coherence.

Difference in heart-coherence between older men and women

Difference in heart-coherence between older men (blue) and women

So although the average heart-coherence for the hospital interior set (4) is at the lower end for both men and women, the effect isn’t convincing in view of the scores for the other sets. The results don’t support the hypothesis.

Conclusions
For an exploratory first experiment the analysis has yielded some interesting results. The main hypothesis that self-reported relaxation, heart-coherence and BPM would be lower for the hospital interior set (4) than for all of the landscape sets is partly supported.
The self-reported relaxation and the heart-coherence showed significant results.

The lack of significance for heart-rate may be due to the small group, or it may suggest that the differences between the sets weren’t big enough. To address this I want to reduce the number of sets in the next experiment and introduce a stress stimulus to create more contrast between the states of the participants.
Judging from the analyses it is clear to me that for the next experiment the age range should be more homogeneous.
For me the most surprising and promising result was the high heart-coherence of the men for the abstract paintings. People were skeptical about using these abstract stimuli, as there is not much support in the literature that non-realistic images have any effect on viewers. Of course this will require more research, but it is an interesting and unexpected result.

Virtual View: conducting the first experiment

Now that the research goal was clear, the stimuli were collected and the methods were integrated in the EventIDE experiment, it was time to look for participants. We needed at least 30 participants, equally divided between men and women. Avans Hogeschool has thousands of students and staff, so we didn’t expect that to be a problem. The students wrote an inviting message on a digital notice board asking people to participate but only got two reactions. Enter the next strategy: walking up to anyone they met and just asking them to take part. That worked a lot better and most of the participants were recruited this way. Some of their classmates were invited through text messages as well. In the end 33 participants took part, a mixture of students and staff.

Photo by Carlos Ramos Rodriguez

The students arranged the lab set-up and together we determined the protocol. The lab was a small classroom with a smart board with speakers. The students cleared most of the room, leaving it clutter free. The table was installed at a distance of 250 cm from the smart board. The projection was 154 x 108 cm. For the record I checked the sound levels of the different sets in the lab set-up with my decibel meter. They might have a strong influence so it is good to know at what average levels the sounds were played.

The sound level during the baseline measurement (no sounds were played) was 33 dB. The autumn set with repetitive bird sounds was 47 dB, the deflecting vistas with birds and running water sounds 43 dB, the hospital interiors with hospital waiting-room sounds 48 dB, the standard preferred landscape with running water sounds 48 dB and the abstract landscape paintings with melodious birdsong 56 dB.

Sketchup made by students Avans

The students led the experiment; I came for the first couple of trials to get a feel for the atmosphere and give some tips. On arrival people were welcomed and asked to turn off their phones. We also asked if they’d been to the bathroom: because we use quite a lot of running-water sounds and the experiment lasts around 20 minutes, this might become an issue, and we didn’t want people to get distracted because they needed to go to the bathroom and couldn’t. The sensor was placed on the earlobe. The course of the experiment was explained to the participants, who were told that all data was anonymous and that they could leave at any time should they feel the need to end the experiment.

Participant id, age and gender were entered by the experiment leaders and then the participants were left alone with the stimuli and the questions.

As soon as the experiment was over the leaders would enter the lab for removal of the sensor and debriefing. Most participants were enthusiastic about the experiment and agreed to take part in the next experiment.

The next step is analysing the data, I can’t wait for the results!

 

Virtual View: research methods

How does one research the influence of landscape and sound on a human? Fortunately a lot of research has gone into finding out how people react to visual landscape stimuli. Most articles I’ve read made use of static pictures, some used video. As pictures can be found in abundance on the web and are easily stored and manipulated I chose static colour pictures as the main visual stimulus.

In most experiments natural landscapes are compared to urban environments with varying amounts of green. Almost always the natural and greener urban scenes have more positive effects on health- and affect-related variables than the urban environments. So it seemed logical not to use pictures of urban environments. Together with the students I decided on using landscape pictures that were at odds with the most preferred landscape: chaotic natural scenes with a restricted view and no deflected vistas or water. When I discussed my experiment setup with Sarah she strongly recommended I use a control set of stimuli. That way I could (hopefully) confirm the findings from other experiments, and I’d have a contrast set to compare the natural scenes to, hopefully showing significant differences between the contrast set and the different landscapes. As the installation will be placed in health care environments I decided to make a set of neutral hospital interiors as the contrast set.

stimuli

The final installation will be an animation, so I wanted to use sets of landscapes to mimic the animation effect a little. We decided on sets of 6 images. Then we had to figure out how long the images would need to be shown to have a measurable effect. Not much could be found in the literature about this, so the students did some tests, showing the images for different time periods. The effect on the heart-rate was very diverse. So I consulted Malcolm and asked him what to make of this. He said the sample was too small to conclude anything. His suggestion was to show people two sets with the images displayed for different lengths and then ask them what they preferred. He had already pointed out earlier that it does take some time for stimuli to take effect. Unfortunately the students only compared 10 and 25 seconds. From that they concluded that 25 seconds was a bit too long but that people preferred the longer exposure. So we settled on 20 seconds per image, and each set would last two minutes.

Of course a baseline measurement was needed for the heart-rate as well as for the self-reported data (see below). For the experiment to have any scientific value, Malcolm said, I needed at least five minutes of baseline measurement. To not complicate things further, Hein advised not to use any specific stimulus but just an empty screen. It would be quite a long time to sit there and do and see nothing, but it would be for a good cause!

As I reported earlier, research on the effects of natural sounds has been a lot more sparse. But as with visual landscapes, water was perceived as more pleasant compared to, for example, mechanical sounds. And aesthetically pleasing, non-threatening bird sounds seem to have a positive effect on attention restoration and stress reduction. So we used different combinations of water and bird sounds. The hospital interior set was accompanied by sounds from a hospital waiting room.

In this review of health effects of viewing landscapes there’s an extensive list of research and of the physiological parameters measured. For Virtual View I’m interested in heart-rate and heart-coherence. Furthermore I would like to know how a certain landscape makes people feel. I want the installation to have a relaxing effect and to positively influence a sense of well-being. For measuring the physiological side I of course use the Heartlive sensor. It measures beats per minute and calculates heart-coherence. The EventIDE software logs the heart data every second and calculates means for every picture.

I don’t own a device to measure, for example, skin conductivity (GSR), and I’m also curious about how people feel when watching the sets. So I needed some record of perceived relaxation state and affect. It was not easy to find a (short) questionnaire which measures that. Malcolm pointed me to the Smith Relaxation States Inventory 3 (SRSI3). It is a very interesting and validated inventory but alas consists of 38 items. It doesn’t make sense to ask people 38 questions after two minutes of pictures. The questionnaire may not be modified without consent, so I asked Sarah what to do. She suggested simplifying things and just asking people how relaxed they are on a 10-point scale.

She said 10 points are better than five because it is easier to see the middle and it is more fine-grained. It gives people the opportunity to pinpoint how they feel. We settled on three questions: I feel at ease, I feel relaxed, I feel joyful and happy. If my installation can make that happen I’m satisfied, no matter what the heart-coherence or heart-rate is. All questions are integrated in EventIDE. Carlos, one of the students, made nice colour feedback for the scale.

The students take notes of remarks the participants make on their experience of the trial. This may also yield interesting results in relation to the experiment data.

Virtual View: building an experiment

I was very lucky to meet Ilia from Okazolab. When I told him about Virtual View and the research I was planning to do, he offered me a licence to work with EventIDE. This is a state-of-the-art stimulus creation software package for building (psychological) experiments with all kinds of stimuli. Ilia has built this software, which was, at the time I met him, still under development. Besides letting me use the software he offered to build an extension to work with the Heartlive sensor. He’s been very supportive in helping me build my first experiment in EventIDE.

It is a very powerful program, so it does take a while to get the hang of it. The main concept is the use of Events (a bit similar to slides in a PowerPoint presentation) and the flow between these events. Each event can have a duration assigned to it. On the events you can place all kinds of Elements, ranging from bitmap renderers to audio players and port listeners. Different parts of the Event timeline can have snippets of code attached to them. The program is written in .NET, you can do your coding in .NET, and you can also use XAML to create a GUI screen and bind items like buttons or sliders to variables which you can store.

You can quickly import all the stimuli you want to use and manage or update them in the library. From the library you drag an item onto a renderer Element so it can be displayed and gets a unique id. We’ll use this id to check the responses to the individual images.

The Events don’t have to follow a linear path. You can make the flow of the experiment conditional. So for my design I made a sub layer on the main Event time line which holds the sets of images and sounds. The images in each set are randomised by a script and so are the sets themselves as we want to rule out the effect of order of the presentation. So in the picture you can see the loop containing a neutral stimulus, 6 landscape pictures with a sound and a questionnaire. This runs 5 times and goes to the Event announcing the end of the experiment. During the baseline measurement and the sets the heart rate of the participant is measured. And the answers to the questions belonging to each set are logged.
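The randomisation lives in a snippet inside EventIDE; purely as an illustration of the idea, here is what it boils down to, with placeholder set and image names.

```python
import random

# Placeholder names: in the experiment each set holds 6 landscape images plus one sound.
sets = {
    "set_a": ["a1.jpg", "a2.jpg", "a3.jpg", "a4.jpg", "a5.jpg", "a6.jpg"],
    "set_b": ["b1.jpg", "b2.jpg", "b3.jpg", "b4.jpg", "b5.jpg", "b6.jpg"],
    "set_c": ["c1.jpg", "c2.jpg", "c3.jpg", "c4.jpg", "c5.jpg", "c6.jpg"],
}

set_order = list(sets)
random.shuffle(set_order)          # randomise the order of the sets per participant
for name in set_order:
    images = sets[name][:]
    random.shuffle(images)         # randomise the image order within each set
    # ...show each image for 20 seconds, log heart data, then present the questionnaire
```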

Data acquisition and storage are managed with the Reporter element. You can log all the variables used in the program and determine the layout of the output. After the trial you can export the data directly to Excel or to a text or csv file. Apart from just logging the incoming heart-rate values, we calculated means from them inside EventIDE for each image and for the baseline measurement. This way we can see at a glance what is happening with the responses to the different images.

For me it was kind of hard to find my way in the program. What snippet goes where, how do I navigate to the different parts of the experiment? But the more I’ve worked with the program the more impressed I’ve become. It feels really reliable and with the runs history you are sure none of your precious data is lost.

Virtual View: designing the first experiment

I had an idea what I wanted to research in my first experiment after reading the different articles. Looking at the end users, frequent visitors to hospitals and the chronically ill, I want the final piece to be first and foremost a pleasant and relaxing experience. It would be nice if there was an actual physical change that can be measured. The piece should have a stress reducing and restorative effect too. This can be both a subjective experience and a quantified measurement in form of heart-rate and heart coherence. And there are of course the landscapes and the sounds that should induce these states.

So how do you convert these goals into an experiment design? You follow a course and you ask people who have a lot more experience with designing psychological experiments!

I started out with a way too complex idea: combining stress induction and testing stimulus effects in one experiment. I’ve had great input from my professor Hein at the Open University, Sarah (PhD in psychology), Ilia (developer of stimulus creation software) and Malcolm (information scientist and psychologist) from Heartlive. Discussing my ideas with them helped me a lot.

Together with the students I looked at the types of landscapes and sounds that would be most valuable to explore for the Virtual View installation. We’ve decided to test 5 sets of 6 landscape images based on, among other things, the most preferred landscapes as defined by Ulrich. We also explore the mystery aspect of landscapes as outlined in the attention restoration theory by Kaplan and Kaplan. Each set of images has a sound to go with it. We use one contrast set of neutral hospital interiors accompanied by hospital sounds. Another thing we want to explore is non-photo-realistic landscapes. As the final piece will consist of computer-generated graphics with a certain degree of abstraction, we want to compare the response to abstract landscape paintings with that to the photo-realistic material.

From the little research that has been done on the effects of (nature) sounds we’ve come to different combinations of running water and birdsong. These are the sets and sounds {in curly braces}:

a. Preferred landscape with water element {running water}
b. Preferred landscape in autumn {repetitive bird calls}
c. Neutral hospital interiors {neutral hospital sounds}
d. Landscape with deflecting views {running water and melodious birdsong}
e. Preferred landscape as abstract painting {melodious birdsong}

While experiencing the stimuli the participants’ heart beat will be measured with the Heartlive sensor. This will give data in the form of beats per minute, inter-beat interval and heart coherence. A questionnaire on the perceived relaxation state will give insight into how the different stimulus sets are experienced by the participants and how they affect their sense of relaxation.

We expect combination d) to have the most positive effect compared to the other sets: higher IBI values, lower BPM values, higher coherence and the most self-reported relaxation. We expect the neutral hospital interiors to score the lowest means on those variables.

The sets and the images in the sets are randomised for each participant. The sounds are attached to one set. The participants will see all the sets (repeated measures). In the end we’ll be able to compare the different means of all the sets.

In the next blog I’ll explain more about building the experiment in EventIDE, the stimulus creation software I mentioned above.