Visualising conversation

During Dutch Design Week, Awareness Lab conducted an experiment which consisted of a virtual tour through the future Meditation Lab. Visitors viewed a slide show and got an explanation of what it will entail to use the Meditation Lab Experimenter Kit (MLEK).

After the tour we asked them open-ended questions like: Can you imagine using MLEK? Do you think the use of the PC is disturbing or contributing to your mindset and attitude? Do you have experience in meditation?
We collected 10 forms and got responses from 14 participants. We were touched by their involvement and by their interesting and valuable remarks. The remarks turned out to fall into a few areas of interest: some were mentioned only once, others up to six times by different people.

DDW visualisation of comments during the Meditation Lab virtual tour

You may download this A3 sized PDF: DDW17Viz

I would like to thank: the participants, Creative Ring Eindhoven, Meike Kurella, Hans d’Achard.

Single person experiments with light

A romantic dinner by candlelight, bright lights in an office building: both give us very different experiences. We all know from experience how light can influence our mood and the way we perceive a space.
What I want to find out with Meditation Lab is whether light conditions can also influence the quality of your meditation experience. I have a hunch that they do. This hunch is based on over 20 years of daily meditation practice, and I've found starting points on optimal lighting during meditation in scientific research.

Building a meditation lab in my attic

Conditions for a good meditation session

Contrary to a commonly held belief, meditation isn't about being relaxed and a little sleepy. I practice in the Buddhist tradition of Vipassana (insight) meditation. This form of meditation is about being fully present in the moment without effort. This clear observation will give a person insight into the true nature of reality, and this insight will help to overcome suffering and to become a wiser and more compassionate being. An important concept in this context is the Satipatthana.
So the ideal state for a good meditation session is being relaxed but at the same time alert. I had heard about changing light conditions in classrooms to support different activities and states of mind of students, and I was also wondering if work had been done on the psychological aspects of light. I'll summarize my findings and explain how I will translate them into single-person experiments.

Working with a light expert

Before diving into the theory I would like to explain how I will go about changing the light conditions. I was very fortunate to be introduced to Tom Bergman, Principal Scientist at Philips Lighting. He has been working on what he calls Light instruments: LED light systems that can be programmed and played like a musical instrument. With his instruments he wants to go beyond mere functionality and use light for expression and experience. Our goals and explorations were a perfect match. I will be using his 9 x 9 mosaic instrument. It can produce all colours and make beautiful and unexpected colour transitions. Also interesting is that it has been tested as a tool for relaxation by master's student Nina Oosterhaven (1). Her study showed, for example, that looking at changing patterns of light produced a significant reduction in heart-rate. So there are interesting starting points for working with the instrument.
The light instruments are of course very specialized and not commercially available, so Tom kindly also supplied me with a Philips Hue Go. This will enable me to try out similar settings with a consumer device which is already Internet-of-Things-ready.

The lab set up: Light instrument, meditation mat and data server

Types of light

Psychological effects

In the various articles I read I was looking for settings of light colour and intensity that would either relax people or activate them and make them alert. There hasn't been much research on the psychological effects of lighting. Seuntiens and Vogels (2) have done research on atmosphere and light characteristics in living-room settings with a group of lighting designers. They looked at four types of atmospheres, of which activating and relaxing are relevant for Meditation Lab. Their findings on the influence of colour temperature, brightness and dynamics on these atmospheres were interesting. In general: warmer (+/- 2700 Kelvin), static and less bright light (180 lux) is perceived as relaxing, while cooler (+/- 3800 K) and brighter light (390 lux) is perceived as activating; the activating light can have a slow dynamic.

School performance

Sleegers et al. (3) looked at school performance in children and students under adjusted light conditions. Their studies used built-in light systems with different settings. Focus, calm and energy are the most interesting for my project. Energy is a notable setting: it is used in the morning or after mealtime to overcome sluggishness. The settings correspond with the following light properties (measured at eye height):
Energy: 650 lux and 12000 K colour temperature
Focus: 1000 lux and 6500 K colour temperature
Calm: 300 lux and 2900 K colour temperature

Staying awake

Jacques Taillard et al. (4) studied the effects of blue light on staying awake while driving a car at night. They compared the effects of continuous blue light to drinking coffee. Compared to a placebo, both the coffee and the blue light condition resulted in significantly fewer inappropriate line crossings, with coffee doing only slightly better than blue light. The light source was a Philips GOLite with a wavelength of 468 nm. The light level was around 20 lux, measured at eye level.

Research design

Sleepiness, tension and lack of focus are challenges you face when meditating. By experimenting with different types of light I want to find out if the findings from other areas can be used in a meditation setting. I will use warm white light for relaxation, cool white light for focus and blue light for alertness. I will be exposed to one light condition per 20-minute meditation session. Before and after every session I fill in the standardised questionnaires I have designed. I have started single-person experiments (n=1) and have designed the following experiments.
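Because the time of day is likely to interact with the light condition, I spread the conditions over different time slots. Below is a minimal sketch of how such a counterbalanced session schedule could be generated; the condition and slot labels are my own illustrations, not part of the MLEK software.

```python
import random
from itertools import product

# Hypothetical labels; the real sessions use the light instrument's own presets.
CONDITIONS = ["warm_white", "cool_white", "blue"]
TIME_SLOTS = ["morning", "afternoon", "evening"]

def make_schedule(repetitions=2, seed=7):
    """Return (time_slot, condition) pairs so that every condition occurs
    equally often in every time slot, in randomised order."""
    rng = random.Random(seed)
    sessions = list(product(TIME_SLOTS, CONDITIONS)) * repetitions
    rng.shuffle(sessions)
    return sessions

for number, (slot, condition) in enumerate(make_schedule(), start=1):
    print(f"session {number:02d}: {slot:<9} -> {condition}")
```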

Design single person experiments

There is no baseline measurement included in the single-person meditation sessions. Instead I have conducted 54 baseline sessions under my usual meditation conditions during a 6-day solitary retreat at home. The sessions took place throughout the day and I didn't manipulate anything, especially not the light conditions, so they varied widely as the day progressed.

Current findings

At the moment I’m conducting n=1 experiments using the Light instrument and the three main light states described above. I’ve set up a darkened lab to control the light conditions. I keep my eyes slightly open with my gaze turned down.
My first impressions are that there is a difference from what I normally experience during meditation. The white lights I find quite relaxing and somehow invigorating; the blue light I find less pleasant and a bit depressing. I suppose the light will interact with my overall state of focus, sleepiness and alertness as it fluctuates during the day. That is why I try to do the experiments at different times of the day while using the same light setting. I do worry a bit about my sleep when meditating in the evening in bright light. For that reason I have turned down the brightness (there are 5 settings) in an effort not to affect my sleep too much.

The single person experiments are my starting point. Later I will report on my design for group experiments. I’m always on the lookout for people who would like to join the experiments. So please leave a comment if you want to participate.

References
1) Oosterhaven, N. (2017). Fascinated by Dynamic Lighting. Master of Science thesis, Human Technology Interaction.
2) Seuntiens, P.J.H. & Vogels, I. (2008). Atmosphere creation: The relation between atmosphere and light characteristics. Proceedings of the 6th Conference on Design and Emotion 2008.
3) Sleegers, P.J.C., Moolenaar, N.M., Galetzka, M., Pruyn, A., Sarroukh, B.E. & van der Zande, B. (2013). Lighting affects students' concentration positively: Findings from three Dutch studies. Lighting Research & Technology, 45(2), 159-175.
4) Taillard J, Capelli A, Sagaspe P, Anund A, Akerstedt T, Philip P (2012) In-Car Nocturnal Blue Light Exposure Improves Motorway Driving: A Randomized Controlled Trial. PLoS ONE 7(10): e46750.

How to test a meditation wearable?

I suppose the answer to that question is obvious but not so easy to realise: during a retreat. But still, that is what I did. Last week I spent 6 days meditating while at the same time putting my brand new wearable and software platform to the test.

It was snowing outside while I was doing my 6-day retreat

What is it all about?

For those of you who missed it: for the past 3 months I've been working on the Meditation Lab Experimenter Kit. The focus in these first months has been to design and develop a new Silence Suit wearable, improve the electronics and create a software platform (the Data Server) to log and explore the data.
The whole team has been working really hard to get the prototype ready for single-user testing. It was quite exciting to put together all the different parts, which had been developed by different team members at separate locations. I managed, only just in time, to get everything working for the start of my self-conducted retreat.

Data science

The main goal was to gather as much baseline data as possible. At a later stage I will try to influence my meditation through manipulating the light. But to really see the effects I need insight into how my ordinary meditation data looks. So German, our AI and data science expert, advised me to get as many 20 minute sessions as possible. I managed to do 54!
Things I wanted to know:
Do all the sensors produce reliable data?
How stable is the software platform?
How easy is it to use the wearable and the platform?
Will I enjoy using both?

Do all the sensors produce reliable data?

Getting good heart-rate data was the biggest challenge

Because I had been working with most of the sensors in my first prototype, I had a pretty good idea of what the data should look like. Programmer Simon had swiftly put together a script that could plot the data from all the sensors in graphs. That way I could easily grasp the main trends. It immediately became clear that the heart-rate sensor wasn't doing what I'd hoped: a lot of beats were missed, and once only 2 data points were collected in 20 minutes (and no, I was not dead).
Oddly enough the rest of the data was fine. I tried recharging the batteries and changing the ear clip but nothing worked and whether or not I’d get good data seemed unpredictable. Until the final day.
While looking at the graphs after I’d finished a session I casually rubbed my earlobe and it felt cold. I looked at the data and saw that the signal deteriorated towards the end of the session. Eureka! The blood flow to my earlobe was the problem, not the electronics.
Cold is a major influence but I also want to experiment with the tightness of the clip. It might prevent the blood from circulating properly.
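Since then I do a quick sanity check on the heart-rate log of every session before trusting the plots. This is a minimal sketch of such a check, assuming the Data Server can export a session as a CSV file with a 'bpm' column; the file layout and threshold are my own assumptions, not the actual MLEK format.

```python
import csv

def heartrate_quality(path, session_seconds=1200, min_samples_per_minute=60):
    """Flag a session whose heart-rate log is too sparse to be trusted.

    Assumes a CSV export with a 'bpm' column (one row per sample); adjust the
    column name to whatever the Data Server actually produces.
    """
    with open(path, newline="") as f:
        bpm = [float(row["bpm"]) for row in csv.DictReader(f) if row["bpm"]]
    per_minute = len(bpm) / (session_seconds / 60)
    return {"samples": len(bpm), "per_minute": round(per_minute, 1),
            "ok": per_minute >= min_samples_per_minute}

# Example: heartrate_quality("session_054.csv") might return
# {'samples': 2, 'per_minute': 0.1, 'ok': False} for a session like the one above.
```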
So most sensors performed well, better even than I'd hoped. Unfortunately no data comes from the cute little PCB that one of the students at Design Lab designed and soldered. Also the soft sensor for detecting sitting down (which is also the start button) is still unstable.

Force sensor to measure pressure between fingers

How stable is the software platform?

The software runs on my old Dell laptop, on which Simon has installed the Lightweight X11 Desktop Environment (LXDE). So it runs on Linux, which was a new experience for me. But I like it: it is basic and simple and does what it should. To start the system I have to run the server for data storage and the adapter for communication with the hardware. I must say I am very impressed with the overall performance. There has been no data loss and the plots are great for getting an impression of a session.

Data output from one meditation session

How easy is it to use the wearable and the platform?

I was pleasantly surprised by how comfortable the suit is, even after 10 sessions in one day. Putting it on with attention takes about 2 minutes and then you're all set. You hardly notice that you are packed with 10 different sensors.
The pre and post qualitative forms are easy to use. At the moment I still have to use URLs to access certain functionality, but everything works and that was such a relief. Plotting the data, with around 5000 data points per sensor per 20-minute session (roughly four samples per second), is hard work for my old Dell. But it gives me time to do a little walking meditation…

Maybe it is just me but I don’t mind filling in two forms for every session. I seriously consider every question and try to answer as honestly as I can.
Doing two or three sessions in a row is even easier. All I have to do is refresh the home page of the server and I can start another session.

Will I enjoy using both?

Well yes, using the system was a pleasant experience for me. I did learn that I should not look at the data before filling in the post-meditation questionnaire, because the data caused my mood to plummet. So it will be best to show the data summary only after that has been done.

Session summary. The number of data points will be replaced by mean values.

I have a lot of confidence that the system will be useful and give a lot of insights. There is still a way to go until I can actually automate the light actuation intelligently. But the plots did show variations and now German can work his magic. I can’t wait to see what he will come up with.

Working on the sitting sensor

The most important step Danielle made this week is the formulation of the customer journey. We distinguish four different kinds of users: the Plug-and-float, De-kleine-onderzoeker (the little researcher), the Lab manager and the QS-wizard. The Plug-and-float is the individual user who wants to improve the quality of her own meditation sessions. She focuses on looking at the data and improving the meditation through actuation. De-kleine-onderzoeker wants to do research on the environment: she wants to know which actuation has the most positive effect, and she organizes experiments for herself or for a bigger group. The Lab manager maintains the suits for a bigger group. She is able to work on the sensors and the actuation by adding or removing sensors or actuators. The QS-wizard wants to make new applications herself. For every kind of user, Danielle described how they will use the software. This customer journey is the starting point for the software.

On Monday Danielle went to ProtoSpace in Utrecht to meet the software engineer and the system architect to discuss the data server. We learned that the microcontroller has to be programmed in a more modular way to make it future-proof. This makes it more complex than we first thought.

Today we worked on the sitting sensor. The data we got from the old one were too unstable. The sitting sensor is the on/off button for the system: as soon as you are sitting, it will be logging your session. But that also means that the whole session is interrupted as soon as the sitting sensor does not work. We knew that the surface of the sitting sensor had to be bigger so that it is no problem if you move a little. But the one we had was too big, so it was pretty expensive and not comfortable.

sitting sensor – conductive foil 15×7 cm

As you can see in our notes below, the conductive foil was 15×7 cm at first. Before sitting the value was about 880 or 860; while sitting it was about 140. That is a big range, which makes it possible to move while meditating without interrupting the session. We cut the conductive foil in half to test whether it would still work if it was smaller. As you can see, the range of values became much smaller and the sensor was too unstable again. This might also have been caused by a lack of conductivity. We cut off the tape as you can see below.

sitting sensor – conductive foil 15×3,5 cm

notes sitting sensor – conductive foil

But we thought the conductive cloth we have might conduct better than the conductive foil, so that the small size of 15×3,5 cm would be enough to get a bigger range. We tried it and, as you can see, our experiment was successful. With the 15×3,5 cm conductive cloth we got the best values with the biggest range so far. For now this is our choice. Next week we have to work out how to include it in the suit.

sitting sensor – conductive cloth 15×3,5 cm

notes sitting sensor – conductive cloth
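To make the numbers in our notes concrete: the on/off decision can be sketched as a simple threshold with hysteresis. This is only an illustration based on the values we measured (about 860-880 when the mat is empty, about 140 when sitting); it is not the actual microcontroller code.

```python
def sitting_state(reading, previous_state, sit_below=400, stand_above=700):
    """Debounced interpretation of the analog sitting-sensor value.

    Thresholds are illustrative, taken from our notes: roughly 860-880 when
    the mat is empty and roughly 140 when someone sits down. The gap between
    the two thresholds (hysteresis) prevents small movements from toggling
    the session on and off.
    """
    if previous_state == "sitting" and reading > stand_above:
        return "empty"
    if previous_state == "empty" and reading < sit_below:
        return "sitting"
    return previous_state

# Example: a session starts when the state flips from "empty" to "sitting".
state = "empty"
for value in [880, 860, 150, 140, 300, 145, 870]:
    state = sitting_state(value, state)
    print(value, state)
```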

I see how difficult it is to be the team leader. Danielle has the vision.
She wants to reach her goal, but sees how ambitious it is. It seems very difficult to me to stay true to your own vision while there are still organizational problems you have to solve. We try to formulate a common vision so that every team member knows our plans. This vision has to be the basis everyone is familiar with, so that every team member goes for it. But I am optimistic: together we will get there!


Virtual View: statistics for experiment 3

In experiment three I wanted to see if adding movement to visual content had a bigger lowering effect on heart-rate and subjective stress than just using a still. And I wanted to know if variables like heart-rate and skin conductance could be restored to or below the baseline following a stress stimulus. Sound accompanied the visuals and I used the same soundtrack for both conditions.
The animation consisted of a main landscape layout with different animated elements overlaying that scene. The landscape consisted of a blue sky with white clouds slowly moving over it, three hills with shrubs in different shades of green, and a blue water body with a cream-coloured shore. The animations were started mostly in sequence, so there were just one or two animated elements to be seen at a time, aside from the clouds and the waves on the water body, which were visible most of the time.

Animation still used in condition 2

The other animations are: big and small flocks of “birds”, consisting of 150 and 5 “birds” respectively, which move in random directions within the frame; blossom leaves flying from one side of the screen to the other, an animation that also includes a bee flying across the screen in a slow, searching way; and finally the butterflies, which flutter near the bottom centre of the screen and disappear after a random time span. The visuals are not realistic but simplified, based on the style of old Japanese woodblock prints.
The sounds are inspired by nature but underwent a lot of computer manipulation. The sound is carefully synced with the imagery and movements on the screen.
In both conditions I measured subjective tension (7-point Likert scale), heartbeats per minute, heart-coherence and skin conductivity. The experiment consisted of three stages: a baseline measurement (5 minutes), a cognitive stress task (around two minutes) and the audiovisual stimulus part (5 minutes). Subjective tension was measured before the baseline measurement, after the stress task and after the stimulus. For a full description of the lab set-up and experiment see the previous post.

Sample
The sample consisted of a total of 33 participants, more women than men (75% vs. 25%); this distribution was the same for both conditions. They were mainly recruited from the art centre where the experiment took place; there were a couple of students and some members of the general public. They were randomly assigned to the conditions. The maximum age was 71, the minimum 20 (mean 41,1). One dataset was corrupt, so I ended up with 16 participants (mean age 39,6) in condition 1 (animated landscape) and 16 (mean age 42,7) in condition 2 (landscape still).

Correlations
I've used SPSS 20 to calculate the statistics. I was curious whether heart-rate or heart-coherence would correlate with the subjective tension and/or the skin conductance. I could find very few significant correlations between the different variables. There are only significant connections between the different measurements of one variable: the beats per minute (BPM) of the baseline measurement correlate with those of the cognitive stress task measurement and of the stimulus (landscape) measurement. The same is true for the galvanic skin response (GSR) and the heart-coherence (HC). The only interesting correlation I found was a negative correlation between the baseline HC and the self-reported tension (SRT) of the baseline and the stimulus. This could indicate that, assuming heart-coherence is a measure of alert relaxation, perceived tension at the start and during the task is the opposite of this alert-relaxation state. But the correlations are weak (-.496 and -.501), so not much can be concluded from that.
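For anyone who wants to repeat this kind of check without SPSS, the same correlation matrix can be computed with pandas. A minimal sketch, assuming a table with one row per participant and mean values per phase; the file and column names (bpm_t1, hc_t1, srt_t1 and so on) are my own labels.

```python
import pandas as pd

# Hypothetical export: one row per participant, mean value per variable per phase.
df = pd.read_csv("experiment3_means.csv")  # columns like bpm_t1, bpm_t2, gsr_t1, hc_t1, srt_t1, ...

# Pearson correlations between all numeric variables (SPSS's default for bivariate correlation).
print(df.corr(numeric_only=True).round(2))

# The single pair discussed above: baseline heart-coherence vs. baseline self-reported tension.
print(df["hc_t1"].corr(df["srt_t1"]))
```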

Condition comparison
Before comparing the conditions (with or without motion) I had to check if the stress stimulus had worked and if there was an effect of the audiovisual stimulus in general. Below you see an overview of the variables self-reported tension (SRT), beats per minute (BPM), heart-coherence (HC) and galvanic skin response (GSR). The values are the mean values for the duration of the different parts of the experiment: baseline (t1), cognitive stress task (t2) and stimulus (audiovisual material, both conditions) (t3). You can also see the expected direction of the variables. The significant values are printed in green.
Results overview table
From the table you can tell that there is a significant difference between the baseline measurement and the cognitive stress task on the one hand, and between the stress task and the stimulus on the other. This is true for BPM, GSR and self-reported tension. All values rose during the stress task and decreased during the stimulus presentation. As those measures are strong indicators of stress, this indicates that the stress task worked and that tension varied significantly during the experiment. Heart-coherence shows no significant changes.
For the heart-rate there was even a significant lowering of the mean during the stimulus compared to the baseline, indicating that the BPM was lower than when participants entered the experiment.

Of course I wanted to test if there was a difference in the variables between the conditions; that way I could see if animation was more effective than using only a static image. As you can see from the table there were no significant results for either of the conditions apart from the skin conductivity (GSR). Skin conductivity is a measure of arousal: the more aroused, the higher the value. I would expect the GSR to be low at the start, high during the stress task and again low during the stimulus presentation. The GSR values for the stimulus presentation were significantly lower than during the stress task, but they were still significantly higher than during the baseline measurement. This indicates that the GSR levels haven't gone back to the baseline, let alone dropped below the baseline state. This might be because it takes more time for skin activity to go back to normal; the response is slower than for heart-rate measurements.
We can see a reduction in heart-rate for both conditions with a bigger reduction in heart-rate for the animation condition. But neither of these changes are significant.
For the self-reported tension we see a significant lowering from the higher values during the stress task to the stimulus presentation. This means that people felt significantly less tense watching the landscape than during the stress task. The perceived tension in the animation condition was also lower than at the start of the experiment, though not significantly so. We don't see this effect in the static condition: there the baseline was lower, the effect of the stress stimulus was stronger and the overall variation was bigger. So you can't really draw any definitive conclusions from this data other than that the landscapes reduced arousal in both conditions.

The overall lack of significance for many of the variables in either condition may be caused by the small sample, or it may indicate that there isn't enough difference between the conditions to be significant. This might be caused by the way the stimuli were presented. For the sound we used high-quality active noise-cancelling headphones, so the impact of the sound was big. The screen image on the other hand was rather small (84,5 x 61,5 cm). The effect of the visuals might therefore be less strong in comparison with the high impact of the sounds.

I was of course also interested in the overall differences between the conditions, especially for the landscape stimulus. When comparing the different measurement moments for BPM we can see that at every moment the heart-rate in the static image condition is lower. So the participants in the first condition already started out with a much higher heart-rate. During the stress task the difference is even bigger, and during the landscape presentation the differences become smaller. I had expected that the heart-rate in the first condition would be lower, but the differences are so big to begin with that you can't draw any conclusions from this.

So does animation have a more positive effect on heart-rate, heart-coherence, skin conductance and self-reported tension? I've looked at the interaction between all these variables and animation, but for none of the variables is the effect significant. The major effects are on heart-rate. A bit to my surprise there are absolutely no effects on heart-coherence. In the first condition we even see a (non-significant) lowering of coherence during the animation. I'm therefore not going to use this value to drive my animation, as was my original intention.

Scene comparison
While analysing I got curious to see if there are differences between the scenes of the animation and sound in conditions 1 and 2. The animation and accompanying sounds can be divided into 10 different scenes. During the construction of the video I tried to incorporate various animation elements; they become visible one after the other.
I looked at the effects on mean heart-rate because it showed the most results. I wrote a script to calculate the mean heart-rate for every scene and for both conditions. The results are shown in the graph below.
Mean heart-rate per scene for both conditions

The variations between the scenes were not significant for the sound-with-still condition, but they were at two points for the animated condition. You can view stills of the scenes below. There was a significant reduction in heart-rate of 4,8 between scene 1 (mean 76,6) and scene 2 (mean 71,8), and a significant reduction of 5,1 between scenes 1 and 9 (mean 71,5). This could suggest that more is happening to the participants in the animation condition and that animation has more potential for influencing the heart-rate of users.
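The original script was written in Processing; below is a minimal Python sketch of the same idea. The scene boundaries here are illustrative placeholders (the real ones follow the edit points of the animation), and the log is assumed to hold one BPM sample roughly every 250 ms.

```python
from statistics import mean

# Illustrative start times (in seconds) of the 10 scenes; not the actual edit points.
SCENE_STARTS = [0, 30, 60, 90, 120, 150, 180, 210, 240, 270]
SAMPLE_INTERVAL = 0.25  # the heart-rate was logged about four times per second

def mean_bpm_per_scene(bpm_samples, scene_starts=SCENE_STARTS, dt=SAMPLE_INTERVAL):
    """Group a list of BPM samples into scenes and return the mean per scene."""
    scenes = [[] for _ in scene_starts]
    for i, bpm in enumerate(bpm_samples):
        t = i * dt
        # index of the last scene whose start time has passed
        scene_index = max(k for k, start in enumerate(scene_starts) if t >= start)
        scenes[scene_index].append(bpm)
    return [round(mean(s), 1) if s else None for s in scenes]
```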

Stills from the 10 different scenes

Virtual View: results experiment 2

The analysis of the second experiment has taken a long time. At first there appeared to be no significant results on any of the variables, except on the heart-rate during the cognitive stress task. So I consulted different people with a degree and research experience and asked for help. I've really learned a lot from them. They all have different approaches and ways of working, so I've picked out all the good tips and insights. My thanks go to Sarah, Malcolm and Marie. The latter is a researcher at Tilburg University; her knowledge of statistics dazzled me. She is the one who recommended a different analysis, which has yielded more significant results.

Research questions

My main questions for this experiment were: Which type of stimulus results in the most stress reduction and relaxation? And which stimulus produces the highest heart-coherence? I want the values of that variable to drive my landscape animation.
To test these questions I used the following dependent variables: BPM, heart-coherence, self-reported stress and self-reported relaxation; later I also introduced heart-rate variability, calculated from the inter-beat interval. These were measured during the baseline measurement, the cognitive stress tasks and the stimulus sets.
The independent variables are: stimulus set 1 with 12 landscape photographs and synthetic nature sounds; stimulus set 2 with more abstract landscapes styled by me, with the synthetic nature sounds in the background; and stimulus set 3 with 12 photographs of kitchen utensils and a soundtrack of someone preparing a salad. The expected direction of the variables will be explained below.

Analyses

After struggling for some time with the non-significance of the variables in the different sets I discovered that the randomisation of the sets hadn't been ideal. There were 33 participants who viewed the sets in 6 different orders. On top of that the group size per order was different: some groups had only 4 participants, others 10. This is something I'll have to take into account in my next experiment.
I used a repeated-measures analysis. For my first, non-significant results I had used my baseline measurement as a covariate. Marie said that wasn't the way to go, so I switched to plain repeated measures where the baseline is simply the first measurement, with no covariates. And I did a post hoc analysis (Bonferroni) to see the differences between the set results.
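For readers who prefer open tools over SPSS, a comparable analysis can be sketched in Python with statsmodels and scipy. This is only an illustration and assumes a long-format table with one row per participant per measurement moment; the file and column names are mine.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM
from scipy import stats

# Long format: one row per participant per measurement moment.
# Hypothetical columns: participant, moment (baseline/set1/set2/set3), bpm.
df = pd.read_csv("experiment2_long.csv")

# Repeated-measures ANOVA with the baseline as just the first level, no covariates.
res = AnovaRM(df, depvar="bpm", subject="participant", within=["moment"]).fit()
print(res.anova_table)

# Bonferroni-style post hoc: paired t-tests between moments,
# p-values multiplied by the number of comparisons.
moments = ["baseline", "set1", "set2", "set3"]
pairs = [(a, b) for i, a in enumerate(moments) for b in moments[i + 1:]]
for a, b in pairs:
    x = df[df.moment == a].sort_values("participant")["bpm"].to_numpy()
    y = df[df.moment == b].sort_values("participant")["bpm"].to_numpy()
    t, p = stats.ttest_rel(x, y)
    print(f"{a} vs {b}: t={t:.2f}, p_bonferroni={min(p * len(pairs), 1.0):.3f}")
```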
This is an overview of the results:
Results overview
Sarah made this clear lay-out of the research results compared to the expected results.
As you can see from the blue results, the subjective stress measurements are significant compared to the baseline for all three stress tasks. For the first stress task (note that this task isn't connected to set 1, it is just the first task after the baseline measurement) the difference in heart-rate is significant. There is also a significant difference in HR during the landscape set, and the heart-rate for the kitchen utensil set is significant as well. Even though the heart-coherence has the right direction, none of the changes are significant. There are also no significant differences between the subjective relaxation questionnaires.

On Sarah's recommendation I also looked at the correlations between all the variables. That is very interesting as it reveals relationships between the variables. As the subjective relaxation questionnaire didn't show any significant results, I was curious to see how it correlates with the stress questionnaire. There should be a significant negative correlation between the two. And there is: it is especially strong between the baseline stress measurement and all relaxation measurements. On the other hand there was no correlation between subjective relaxation and heart-rate, even though a lowering of heart-rate may be considered an indication of relaxation. All in all the relaxation questionnaire doesn't give convincing results. There was a very strong correlation between heart-rate and heart-rate variability. In fact too strong: as Sarah pointed out, they measure the same thing, so there is no use in including this variable in the results.

First set

As the stress stimulus was strongest the first time (see below), Marie advised me to do an analysis on the first set that was shown after the first stress task, independent of what kind of stimulus set it was. This was the distribution: set 1 was shown 11 times, set 2 14 times and set 3 8 times. The results from this analysis completely matched the other results: heart-rate and heart-rate variability are significant (this is of course an average over all three sets shown); heart-coherence and self-reported relaxation were not. There was no interaction effect between the set shown and either heart-rate or heart-rate variability, which suggests that the order has no effect on the results.

Graphs

Stimulus overview

I made some manual graphs to see the effects of the stimuli on heart-rate and heart coherence side by side. There is no significant difference between the pictures in the three sets. For me it is still nice to see the difference between the pictures. The graph is done manually in Photoshop.

Stimuli used

HRV

When I asked Malcolm for advice about the results, he suggested I calculate heart-rate variability from the inter-beat interval values that I'd logged. Heart-rate variability is known to correlate well with stress (negatively) and relaxation (positively), so that's valuable information to add to the results.
I wrote a script in Processing to calculate and visualise the HRV for the whole experiment, divided into 5-second windows. The white line is the baseline measurement, the red lines are the stress inductions and the green lines are the audiovisual stimuli. You can tell from the image that the stress induction has some effect.
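The original script was written in Processing; here is a minimal Python sketch of the same calculation, using RMSSD (a common HRV measure) per 5-second window. The input is assumed to be a plain list of inter-beat intervals in milliseconds.

```python
import math

def rmssd(ibis_ms):
    """Root mean square of successive differences, a common HRV measure,
    for a list of inter-beat intervals in milliseconds."""
    diffs = [b - a for a, b in zip(ibis_ms, ibis_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs)) if diffs else 0.0

def hrv_per_window(ibis_ms, window_s=5):
    """Split a stream of inter-beat intervals into windows of window_s seconds
    (based on cumulative time) and return the RMSSD of each window."""
    windows, current, elapsed_ms = [], [], 0.0
    for ibi in ibis_ms:
        current.append(ibi)
        elapsed_ms += ibi
        if elapsed_ms >= window_s * 1000:
            windows.append(rmssd(current))
            current, elapsed_ms = [], 0.0
    if current:
        windows.append(rmssd(current))
    return windows
```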

HRV results
Looking at my correlation table, however, there is only a significant negative correlation between the baseline subjective stress measurement and the HRV. Neither the other stress measurements nor the subjective relaxation measurements show any correlation.
It is hard to tell from the image, but the photo-realistic landscape set shows a significant difference from the baseline measurement. The third set is almost significant (p = .054).

Conclusions

The first conclusion should be that the differences between the stimuli in the sets are small. There are significant* differences in average heart-rate between the sets (68,26 (baseline); 66,46* (set 1); 66,75 (set 2); 66,32* (set 3)), but the differences are really small. There is a reduction in all the sets. Set 3, the kitchen utensils, has the lowest average. That set isn't very stimulating, which might explain the low heart-rate. This conclusion is also backed by the fact that the results from using only the first set shown are comparable to those from working with the individual sets.
Heart coherence, which I want to use for driving the animation and triggering the interaction with the installation, had its highest average for the styled landscapes with sounds, but the results were not significant. It does not seem a good measure for pure relaxation. Heart coherence is a difficult term, but this description gives a good indication of the different aspects of this state: "In summary, psychophysiological coherence is a distinctive mode of function driven by sustained, modulated positive emotions. At the psychological level, the term 'coherence' is used to denote the high degree of order, harmony, and stability in mental and emotional processes that is experienced during this mode." (From The Coherent Heart, p. 12, McCraty, R. et al., Institute of HeartMath). On page 17 of that document it states: "In healthy individuals as heart rate increases, HRV decreases, and vice versa." As HRV and coherence are closely linked, the same is true for heart coherence. Even though heart coherence is much broader than relaxation, it also encompasses activation of the parasympathetic nervous system, which is a marker for relaxation. Important in heart coherence is the inclusion of positive emotions. This is what I try to evoke by using landscapes based on generally preferred landscape types.
The Virtual View installation should provide a relaxing distraction for people in care environments. Cognitive states that relate to this goal are soft fascination and a sense of being away, as introduced in Attention Restoration Theory (ART) by Kaplan and Kaplan. I'm guessing now that heart coherence might correlate with those cognitive states. This is something I will explore in the next experiment.

The stress task was perceived as stressful, judging from the subjective reports. These findings are partly backed by the physiological data: only the heart-rate of the first stress task differs significantly from the baseline. Our goal in introducing a stress task was to create bigger differences in heart-rate. For that to be successful the stress task should really produce stress. Although people reported feeling stressed, we couldn't measure it three times in a row. So for the next experiment I'll work with 3 groups who will all get only one stress stimulus and one landscape stimulus.

All in all this experiment doesn't prove that my styled landscapes with synthetic nature sounds create the most relaxation and heart-coherence, but the results don't prove that they don't either. So for the next experiment I'll continue with the styled landscapes and introduce animation.

Virtual View: conducting experiment two

Our ideal for the execution of the second experiment was to have 60 participants of 40 years and older. There would be two labs where the experiment would be held in alternating rooms over 3 days. The rooms would be in a quiet part of the school, as we had quite a lot of disturbance during the first experiment.

The first setback was the location. It wasn't possible to have two classrooms for three days at the same time, and there weren't any rooms available in a quiet part of the school. Eventually there was no other choice than to use a room in the middle of the busy documentation centre and spread the experiments out over 5 days. The room was a kind of aquarium: it was very light and you could see people walking around through the glass walls. During the tests there was disturbance from talking and from students opening the lab door by mistake. So far from ideal.

But my main disappointment was with the sample. Only one day before the start of the experiment the students notified me that they had managed to get only 20 participants instead of the 60 we had agreed upon. We were mostly depending on the teachers for participation, but it was the period of the preliminaries and they were very busy. Also, the trial would now take 40 minutes instead of the 20 to 30 minutes the first experiment took. Had I known earlier I could have taken steps and come up with a suitable solution.
As it was I had to improvise. I had to let go of the control group and broaden the age range. In the end 6 students below 30 years old took part. I asked around in my own network and managed to recruit 10 people in the right age group. In total we tested 40 people, all of whom were exposed to the stress stimulus.

Unfortunately not all the results were valid and useful. Some data was lost due to technical problems. Also quite a number of people made mistakes filling in the questionnaires. We now had two questionnaires, one for self-reported stress and one for self-reported relaxation. The stress questionnaire contained one question in the positive direction (I feel everything is under control) and two negative items (I feel irritated, I feel tense and nervous). Both had to be reported on a 10-point scale.

The stress questionnaire

Apparently this was confusing for some people, and even though notes were taken it wasn't always possible to reconstruct the correct answer. In the next experiment we will also put some text below the numbers to indicate the value.
There were also two very extreme results (outliers); they couldn't be included in the data set as they would mess up the averages too much. So I ended up with 33 data sets I could use for my analyses.

But first the data had to be sorted and structured. It took me quite some time to streamline the copious EventIDE output into a useful SPSS dataset.

The baseline measurement included self-reported stress (pink), heart-rate (orange), heart-coherence (red) and self-reported relaxation (green).

Baseline measurement output
The three answers from all the questionnaires had to be combined into one value and checked for internal validity in SPSS.

It’s nice to take a look at a part of the results from the cognitive stress task:
Output from the cognitive stress task
From the output you can see exactly what the sums were, how much time it took to solve them, what the answer was and whether the given answer was correct or not. I didn't use this data, but it would be nice to see if, for example, participants with more errors have higher heart-rates. Heart-rate (orange) and heart-coherence (red) are again shown below the results.

Before each stimulus set there was the stress questionnaire and after each set the relaxation questionnaire. The output for each set, which consisted of 12 pictures with sound, is laid out as follows:
Picture count | set number | image id | image name | inter beat interval | BPM | heart-coherence
Output for one stimulus set
Each picture was shown for 20 seconds and the heart data was logged around four times per second. The output for one picture looks like this: 60.6|60.5|60.4|60.9|61.2|61.5|61.7|61.8|61.9|61.9|61.9|61.9|61.9|61.9|61.8|62.6|63.1|63.5|63.7|63.5|63.3|63.2|63.2|63.1|63.7|63.8|63.9|63.4|63.1|62.9|62.7|63.1|63.5|63.6|63.6|63.7|63.7|63.8|63.8|63.8|63.4|63.2|62.9|62.8|62.9|62.9|62.9|62.9|62.6|62.2|62.1|61.8|61.5|61.3|61.2|61.1|61.1|61.0|60.9|60.8|61.1|61.3|61.4|61.5|61.6|61.6|61.7|61.7|61.7|61.5|61.3|61.3|61.2|61.2|61.0|60.9|60.8|61.0|61.1|61.2|61.2|61.2|61.3|61.3|61.4|61.4|61.0|
This yields an average of 62.1, which is the value I used. But it is good to have all this data for each individual image. All the image averages had to be combined into a set average so I could easily analyse the differences between the three sets. I'm still analysing the data. More on that in my next post.
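As an illustration, averaging such a pipe-separated BPM string and rolling the picture averages up into a set average takes only a few lines; this is a sketch, not the actual pipeline I used to build the SPSS dataset.

```python
def picture_average(raw):
    """Average a pipe-separated string of BPM samples logged for one picture."""
    values = [float(v) for v in raw.strip("|").split("|") if v]
    return round(sum(values) / len(values), 1)

def set_average(picture_rows):
    """Combine the averages of all 12 pictures in a set into one set average."""
    averages = [picture_average(row) for row in picture_rows]
    return round(sum(averages) / len(averages), 1)

# Example with the start of the logged snippet above:
print(picture_average("60.6|60.5|60.4|60.9|61.2|61.5|61.7|61.8|61.9|61.9|"))
```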

Virtual View: design of experiment two

After conducting and analysing the first experiment some points of improvement emerged.

  • The differences in heart-rate between the sets weren’t significant so we want to create more extremes in heart-rate.
  • One group will get a heart-rate enhancing trigger and there will be a control group that won’t.
  • There was evidence of interaction with age for some of the variables so we want a more homogeneous age group to work with.
  • The experiment should be simplified: fewer sets, and the same sounds kept for the landscape sets. The duration of each stimulus was rather short, so we want to try to double the number of pictures in each set.
  • The control set should be neutral instead of negative.

It was clear that we wanted to introduce stress into the experiment. The target group is patients who visit hospitals; they are under stress a lot of the time. So we have to create a stress stimulus. Together with the students and a teacher from Avans Hogeschool we looked into some of the known possibilities for inducing stress. Our idea was to simulate a hospital setting through minor medical treatments, but we realized this would probably not work with our sample: they would be teachers with a background in nursing, so taking blood pressure won't upset them. I also discussed some options with Malcolm and Sarah. Showing parts of horror movies we considered too subjective. The best option is physical stress in the form of electric shocks or ice water, but this is out of our league; we don't have the knowledge or experience to conduct an experiment like that.

Finally I settled on a cognitive stress task in the form of calculations. As the stress task had to be repeated we needed a stimulus that would remain a challenge and induce some stress, and cognitive tasks have that ability. To keep it challenging there should be different levels, also to keep it interesting for people who are good at doing calculations. I made a little design.

Cognitive task design

I had no idea how this design could be implemented in EventIDE, so I sent my sketch to Ilia, who programmed a nice interface. The subtractions get more difficult if your answers are correct, and on top of that the allotted time decreases after three correct answers in a row.
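The adaptive logic behind the task is roughly this; a sketch of the rules as I designed them, with illustrative numbers, not Ilia's actual EventIDE implementation.

```python
import random

def next_trial(level, time_limit_s, correct_streak, answer_was_correct):
    """Update difficulty level, time limit and streak after one subtraction.

    Rules as sketched in my design: a correct answer raises the difficulty,
    a wrong one lowers it, and three correct answers in a row shorten the
    allotted time. The exact numbers are illustrative.
    """
    if answer_was_correct:
        level = min(level + 1, 5)
        correct_streak += 1
        if correct_streak == 3:
            time_limit_s = max(time_limit_s - 2, 4)
            correct_streak = 0
    else:
        level = max(level - 1, 1)
        correct_streak = 0
    return level, time_limit_s, correct_streak

def make_subtraction(level):
    """Generate a subtraction whose size grows with the difficulty level."""
    a = random.randint(10 * level, 20 * level)
    b = random.randint(1, 9 * level)
    return a, b, a - b
```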

The cognitive task interface
The design is a 2×3 factorial with repeated measures. There will be three landscape/object sets with sounds, and each of them will be experienced either with or without a stress stimulus preceding it.

Factor design

The flow of the experiment is as follows:
Design Experiment 2
Depending on how long it takes participants to complete the questionnaires, the duration of the entire experiment will be around 30 minutes.

For me it was kind of hard to include the different questionnaires. We wanted to check experienced relaxation as we did in the first experiment. But we also wanted to know how much stress participants had experienced during the stress task. As these are opposite experiences I found it hard to find a place for both in the flow of the experiment. I finally settled for checking for self-reported stress right after the cognitive task and reporting relaxation after the landscape stimuli.

I would have loved to measure more physiological stress data; apart from heart-rate and heart-coherence there was no objective data. I discussed it with Malcolm. He kindly offered to lend me some of his equipment, but we realized we just didn't have enough time to implement it properly. So for now I have to make do with the heart-rate and self-reporting.

The dependent variables in this experiment are:
Heart-rate (beats per minute & inter beat interval)
Heart-coherence
Self-reported stress
Self-reported relaxation
The independent variables are:
Photo realistic landscapes & synthetic nature sounds
Styled landscapes & synthetic nature sounds
Kitchen utensils & kitchen sounds
Age and gender

The sample will consist of 30 + 30 participants older than 40 years, without heart problems or heart medication.

Finally I could work on my own creations. This was the time for me to test some of my first sketches of the Virtual View landscapes. They are a combination of computer graphics made in Photoshop and computer-generated images made in Processing, combined into bitmaps. As we wanted participants to be exposed to the stimuli for longer, we doubled the number of pictures in each set from 6 to 12. The inspiration for the landscapes came from our literature study and the results of the first experiment. As an artist I wanted to see what I could leave out and still have a relaxing effect. I also experimented with different techniques to create the image elements.
styled landscape
The photo-realistic images were chosen to resemble the styled images and have the same simple layout. The idea was to see if there would be a difference in relaxation and stress-reduction effect between the computer graphics and the photographs.

Photo realistic landscape

Our initial idea for the neutral images was again to use interiors. We thought of general school areas. But as we were approaching the end of the year the teachers would be pretty highly strung and seeing pictures of the school might not be neutral for some. So we decided to use kitchen utensils. For the sound we used a recording of someone preparing salad.
The sounds to accompany the landscapes were produced and composed by Julien Mier.  For us this was also the first sketch of what Virtual View could sound like. Julien made some nice synthetic birds and bees. We worked towards a piece that was a mix of background noise, silence and unexpected animal noises. The sound was timed to the transitions between the images in the experiment. So every 20 seconds a new piece of sound was started with different accents. We used the same soundtrack for both landscape sets.

Virtual View: results experiment one

In this post I want to give an overview of the results of the first experiment, and I will spare you the heavy statistics speak. So don't expect a scientific article. The data is there and I may write a proper article one day, but that isn't appropriate for this blog.

Together with Hein from the Open University I looked at the data from the first experiment. This is an exploratory experiment, so we're looking for trends and directions to take with us to the next step.
The students did a splendid job organizing the dataset. For each participant there were basic demographic data (gender and age), means and combined means for the perceived relaxation questions, for the separate images and for the images combined in sets. For each set there are means for beats per minute (BPM), the inter-beat interval (IBI) and heart-coherence.
To make sure our self-constructed questionnaire was consistent I did a scale reliability test. All the sets had good reliability for all 5 questionnaires. This just means that there is internal consistency between the questions; the questionnaire itself isn't validated for measuring relaxation. We just asked the three questions.
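Such a scale reliability test usually means computing Cronbach's alpha. For anyone without SPSS, here is a minimal sketch (one row per participant, one column per question; the example numbers are made up).

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for a matrix of shape (participants, items)."""
    item_scores = np.asarray(item_scores, dtype=float)
    k = item_scores.shape[1]
    item_variances = item_scores.var(axis=0, ddof=1)
    total_variance = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Example: made-up answers of 5 participants to the three relaxation questions (1-10 scale).
answers = np.array([
    [7, 8, 7],
    [5, 5, 6],
    [9, 9, 8],
    [6, 7, 6],
    [4, 5, 5],
])
print(round(cronbach_alpha(answers), 2))
```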

We did 4 analyses on the four variables: perceived relaxation (measured with the questionnaires), BPM, IBI and heart-coherence.
The stimulus sets, each with its sound, were:
1. Preferred landscape with water element {running water @ 48 dB}
2. Preferred landscape in autumn {repetitive bird calls @ 47 dB}
3. Preferred landscape as abstract painting {melodious birdsong @ 56 dB}
4. Neutral hospital interiors {neutral hospital sounds @ 48 dB}
5. Landscape with deflecting views {running water and melodious birdsong @ 43 dB}

Self-reported relaxation

self-reported relaxation, sets 1 to 5. Green is females, blue is males

The three questions we asked after the baseline measurement and after every stimulus set were: I feel at ease, I feel relaxed, I feel joyful and happy, reported on a scale of 1 to 10. The three questions were merged into a relaxation scale. The hypothesis was that the overall relaxation score would be lower for the hospital interior set (4) than for all of the landscape sets.
There was a significant effect for relaxation. As you can see from the graph, set number four (hospital interiors) shows a distinct decrease in the sense of relaxation. Although the abstract paintings also score lower, this trend is mainly caused by the dip in relaxation scores on the hospital set, which confirms our hypothesis.

There was also something going on with the interaction between age and relaxation. To gain more insight into the age effect I looked at the data and noticed there are two clear groups: 25 years old and younger, and above 39 years. The groups are about the same size (young 15, older 18); there were no participants between 25 and 39 years old. To test the significance of the relaxation effect for the two groups I ran a test that showed that for the young participants the relaxation effect isn't significant, but for the older participants it is.

relaxation divided by age group. Blue is older.

Heart-rate
For the heart-rate we used two measures based on the same data: beats per minute (BPM) and inter-beat interval (IBI), so it doesn't make a difference which of the two analyses I discuss here. The hypothesis was that the BPM would be higher for the hospital interior set (4) than for all of the landscape sets.
There were no significant differences between the sets, so this hypothesis has to be rejected.

heart-rate for men (blue) and women (green)

But there is again something going on with age, this time in relation to heart-rate. Looking at the graph below it is clear that the heart-rate in reaction to the landscapes and sounds is at odds for set two and set four. The older and younger people react quite differently.

Beats per minute for two age groups. Younger is blue.

Heart coherence
The hypothesis for heart coherence was that the coherence level would be lower for the hospital interior set (4) than for all of the landscape sets.

Heart-coherence for men (blue) and women

There is a significant trend for the age-coherence interaction. Looking at the graph we can see that the coherence for the women is almost the same over the 5 sets but higher than the baseline coherence measurement. The men show a much more varied response, on average a lot lower than the baseline measurement. It is interesting to note that the abstract painting set, number 3, has a very high score for the men.
Looking a bit deeper into this trend, there is again a relation to age. For the younger participants there was no significant difference between the sexes where heart-coherence is concerned. The graph of the older participants shows a significant difference between men and women. The older men cause the interaction effect between gender and heart-coherence.

Difference in heart-coherence between older men (blue) and women

So although the average heart-coherence for the hospital interior set (4) is at the lower end for both men and women, the effect isn't convincing in view of the scores for the other sets. The results don't support the hypothesis.

Conclusions
For an exploratory first experiment the analysis has yielded some interesting results. The main hypothesis that self-reported relaxation, heart-coherence and BPM would be lower for the hospital interior set (4) than for all of the landscape sets is partly supported:
the self-reported relaxation and the heart-coherence showed significant results.

The lack of significance for heart-rate may be due to the small group, or it may suggest that the differences between the sets weren't big enough. To influence this I want to reduce the number of sets in the next experiment and introduce a stress stimulus to create more contrast between the states of the participants.
Judging from the analyses it is clear to me that for the next experiment the age group should be more homogeneous.
For me the most surprising and promising result was the high heart-coherence of the men on the abstract paintings. People were skeptical about using these abstract stimuli, as there is not much support in the literature that non-realistic images have any effect on viewers. Of course this will require more research, but it is an interesting and unexpected result.

Virtual View: conducting the first experiment

Now that the research goal was clear, the stimuli were collected and the methods were integrated in the EventIDE experiment, it was time to look for participants. We needed at least 30 participants, equally divided between men and women. Avans Hogeschool has thousands of students and staff, so we didn't expect that to be a problem. The students wrote an inviting message on a digital notice board asking people to participate, but only got two reactions. Enter the next strategy: walking up to anyone they met and just asking them to take part. That worked a lot better and most of the participants were recruited in this way. Some classmates were invited through text messages as well. In the end 33 participants took part, a mixture of students and staff.

Photo by Carlos Ramos Rodriguez

The students arranged the lab set-up and together we determined the protocol. The lab was a small classroom with a smart board with speakers. The students cleared most of the room, leaving it clutter-free. The table was installed at a distance of 250 cm from the smart board; the projection was 154 x 108 cm. For the record I checked the sound levels of the different sets in the lab set-up with my decibel meter. They might have a strong influence, so it is good to know at what average levels the sounds were played.

The sound level during the baseline measurement (no sounds were played) was 33 dB; the autumn set with repetitive bird sounds 47 dB; the deflecting vistas with birds and running water sounds 43 dB; the hospital interiors with hospital waiting-room sounds 48 dB; the standard preferred landscape with running water sounds 48 dB; and the abstract landscape paintings with melodious birdsong 56 dB.

Sketchup made by students Avans

The students led the experiment; I came for the first couple of trials to get a feel for the atmosphere and give some tips. On arrival people were welcomed and asked to turn off their phones. We also asked if they'd been to the bathroom: because we use quite a lot of running water sounds and the experiment lasts around 20 minutes, this might become an issue, and we didn't want them to get distracted because they needed to go to the bathroom and couldn't. The sensor was placed on the earlobe. The course of the experiment was explained to the participants and they were told that all data was anonymous and that they could leave at any time should they feel the need to end the experiment.

Participant id, age and gender were entered by the experiment leaders and then the participants were left alone with the stimuli and the questions.

As soon as the experiment was over the leaders would enter the lab for removal of the sensor and debriefing. Most participants were enthusiastic about the experiment and agreed to take part in the next experiment.

The next step is analysing the data, I can’t wait for the results!