Hypnosis may still be veiled in mystery – but we are starting to uncover its scientific basis

On the count of three, you will forget this ever happened. Everett Collection/Shutterstock

Devin Terhune, Goldsmiths, University of London and Steven Jay Lynn, Binghamton University, State University of New York

This piece was originally published in The Conversation

Some argue that hypnosis is just a trick. Others, however, see it as bordering on the paranormal – mysteriously transforming people into mindless robots. Now our recent review of a number of research studies on the topic reveals it is actually neither. Hypnosis may just be an aspect of normal human behaviour.

Hypnosis refers to a set of procedures involving an induction – which could be fixating on an object, relaxing or actively imagining something – followed by one or more suggestions, such as “You will be completely unable to feel your left arm”. The purpose of the induction is to induce a mental state in which participants are focused on instructions from the experimenter or therapist, and are not distracted by everyday concerns. One reason why hypnosis is of interest to scientists is that participants often report that their responses feel automatic or outside their control.

Most inductions produce equivalent effects. But inductions aren’t actually that important. Surprisingly, the success of hypnosis doesn’t rely on special abilities of the hypnotist either – although building rapport with them will certainly be valuable in a therapeutic context.

Rather, the main driver for successful hypnosis is one’s level of “hypnotic suggestibility”. This is a term which describes how responsive we are to suggestions. We know that hypnotic suggestibility doesn’t change over time and is heritable. Scientists have even found that people with certain gene variants are more suggestible.

Most people are moderately responsive to hypnosis. This means they can have vivid changes in behaviour and experience in response to hypnotic suggestions. By contrast, a small percentage (around 10-15%) of people are mostly non-responsive. But most research on hypnosis is focused on another small group (10-15%) who are highly responsive.

In this group, suggestions can be used to disrupt pain, or to produce hallucinations and amnesia. Considerable evidence from brain imaging reveals that these individuals are not just faking or imagining these responses. Indeed, the brain acts differently when people respond to hypnotic suggestions than when they imagine or voluntarily produce the same responses.

Preliminary research has shown that highly suggestible individuals may have unusual functioning and connectivity in the prefrontal cortex. This is a brain region that plays a critical role in a range of psychological functions including planning and the monitoring of one’s mental states.

There is also some evidence that highly suggestible individuals perform more poorly on cognitive tasks known to depend on the prefrontal cortex, such as working memory. However, these results are complicated by the possibility that there might be different subtypes of highly suggestible individuals. These neurocognitive differences may lend insights into how highly suggestible individuals respond to suggestions: they may be more responsive because they’re less aware of the intentions underlying their responses.

For example, when given a suggestion to not experience pain, they may suppress the pain but not be aware of their intention to do so. This may also explain why they often report that their experience occurred outside their control. Neuroimaging studies have not as yet verified this hypothesis but hypnosis does seem to involve changes in brain regions involved in monitoring of mental states, self-awareness and related functions.

Although the effects of hypnosis may seem unbelievable, it’s now well accepted that beliefs and expectations can dramatically impact human perception. It’s actually quite similar to the placebo response, in which an ineffective drug or therapeutic treatment is beneficial purely because we believe it will work. In this light, perhaps hypnosis isn’t so bizarre after all. Seemingly sensational responses to hypnosis may just be striking instances of the powers of suggestion and beliefs to shape our perception and behaviour. What we think will happen morphs seamlessly into what we ultimately experience.

Hypnosis requires the consent of the participant or patient. You cannot be hypnotised against your will and, despite popular misconceptions, there is no evidence that hypnosis could be used to make you commit immoral acts against your will.

Hypnosis as medical treatment

Meta-analyses, studies that integrate data from many studies on a specific topic, have shown that hypnosis works quite well when it comes to treating certain conditions. These include irritable bowel syndrome and chronic pain. For other conditions, however, such as smoking, anxiety, or post-traumatic stress disorder, the evidence is less clear cut – often because there is a lack of reliable research.

But although hypnosis can be valuable for certain conditions and symptoms, it’s not a panacea. Anyone considering seeking hypnotherapy should do so only in consultation with a trained professional. Unfortunately, in some countries, including the UK, anyone can legally present themselves as a hypnotherapist and start treating clients. However, anyone using hypnosis in a clinical or therapeutic context needs to have conventional training in a relevant discipline, such as clinical psychology, medicine, or dentistry to ensure that they are sufficiently expert in that specific area.

We believe that hypnosis probably arises through a complex interaction of neurophysiological and psychological factors – some described here and others unknown. It also seems that these vary across individuals.

But as researchers gradually learn more, it has become clear that this captivating phenomenon has the potential to reveal unique insights into how the human mind works. This includes fundamental aspects of human nature, such as how our beliefs affect our perception of the world and how we come to experience control over our actions.


Tricking the brain: how magic works

Gustav Kuhn is a Senior Lecturer at Goldsmiths, University of London. The main focus of his research is attention and awareness, and in particular how attention and eye movements are influenced by social factors. Related to this, he has a keen interest in the science of magic and uses magic to investigate a wide range of cognitive mechanisms, such as attention, memory, illusions, and beliefs. Read on…

This article was originally published on The Conversation. Read the original article.

The magician snaps his fingers and a ball disappears right in front of your eyes. How is this possible, you ask yourself? You have a pretty good understanding of how objects behave and you know from experience that objects cannot simply disappear into thin air, yet this is exactly what you see. Magic is one of the oldest art forms and since written records began, magicians have baffled and amazed their audiences by creating illusions of the impossible. While most of their tricks remain precious secrets, scientists, myself among them, have started studying magic to gain insights into how and why our minds are so easily deceived.

Magic allows you to experience the impossible. It creates a conflict between the things you think can happen and the things that you experience. While some magicians would like you to believe that they possess real magical powers, the true secret behind magic lies in clever psychological techniques that exploit limitations in the way our brains work. Many of these limitations are very counter-intuitive which is why we can experience the magical wonder of the impossible.

How? Let’s start with the basics. Vision is our most trusted sense, and influences many of our thoughts and behaviours. In fact, vision is so important that we often don’t believe things until we see them with our own eyes. But it turns out that our visual experiences are far less reliable than we intuitively think. It’s relatively easy to distort your perceptual experience, and these distortions become very apparent when we look at visual illusions.

Visual illusions occur when there is a mismatch between your perceptual experience and the true state of the world. In the Müller-Lyer illusion, for example, the top line appears shorter than the bottom, although they are exactly the same length.

Seeing the future

We are often surprised by how these illusions deceive us, but it turns out that pretty much all of our perception is an illusion, whether we’re walking down the street or attempting to suss the latest card trick. Intuitively, we think of our eyes as simply capturing truthful images of the world. But in reality, our visual experience results from complex neuronal processes that make clever estimates about what the world is like. And as with all predictions, they are never 100% correct. This leads to errors, and it is these errors that magicians have mastered and exploit.

For example, the vanishing ball illusion is one trick that colleagues and I have studied. In this trick, a magician throws a ball in the air a couple of times and then makes it seem to disappear by pretending to throw it again when in fact it remains secretly concealed inside his hand. What is surprising about this illusion is that most people – almost two thirds – experience an illusory ball being tossed up in the air at the third throw, even though it never leaves the magician’s hand. We experience this “ghost ball” because we see what we believe is going to happen, rather than what has actually taken place. The illusion shows that people perceive things that they believe will happen in the future, even when this belief is completely unfounded.

Ignoring the present

A further misconception about visual experience relates to the amount of detail that we think we are aware of. Intuitively we feel that we are aware of most of our surroundings, but this vivid and detailed subjective experience turns out to be another powerful illusion, equally counter-intuitive and therefore equally open to exploitation by magicians.

Processing large amounts of information is computationally expensive: if you want to process lots of visual information, you need large brains. But large brains come at a cost, since they require large heads and lots of food to support them. So instead of evolving into creatures with humongous brains, we developed extremely efficient strategies that allow us to prioritise aspects of the environment that are of importance, while ignoring things that are less relevant.

What this means is that unless you are paying close attention to something you simply won’t see it. Phenomena such as inattentional blindness or change blindness result from this, where people fail to spot very obvious changes simply because they don’t attend to them. These very powerful examples illustrate that if people are sufficiently distracted they can fail to see a gorilla even when one is right in front of their eyes.

Magicians frequently exploit these attentional limitations by misdirecting your attention and so preventing you from seeing their secret moves. In some of our research we have shown how this can be used to prevent you from seeing fully visible events.

In the lighter trick, for example, a magician is seated at a table across from the viewer (a). He picks up the lighter and flicks it on (c–f). He pretends to take the flame away with his other hand and make it vanish, using his gaze as misdirection away from that hand while, at (f), the lighter is visibly dropped into his lap (g–h). The lighter appears to have vanished. Although the lighter is dropped in full view, half of the viewers completely fail to see this happen because they are distracted.

What this, and other tricks show, is that people often fail to see things even when they are looking straight at them. So don’t be so sure to trust your vision in the future. You never know what’s really happening.

Mathematical modelling in psychology and the dangers of physics envy

Prof. Alan Pickering has a Chair in the Department at Goldsmiths. He has researched in many different areas of psychology since the mid 1980s, but in recent years his focus has been on the psychobiology of personality traits such as extraversion, anxiety, impulsivity and schizotypy. He uses formal models to capture the biological bases of these individual differences. Here he talks about the benefits – and pitfalls – of such an approach.


As a psychologist with a background in the natural sciences, I have always preferred accounts (“models”) of psychological phenomena which draw upon a mathematical or computational framework. There was a nice example of this approach to psychological model building in an earlier entry in the departmental blog by Caspar Addyman. When so much psychological theorising is expressed using simple (and often ambiguous) qualitative verbal arguments, those who turn to mathematical psychology have sometimes been accused of “physics envy”. I would rather think of it as a preference for trying to understand things using “the language that nature speaks in”, as the Nobel Prize-winning physicist Richard Feynman once put it.

The Drift-Diffusion Model

A recent article by Ratcliff et al. (2016) provides some interesting insights into how mathematical models, drawn from physics, may contribute to understanding psychological processes. This article reviewed the vast array of evidence that has accrued, over four decades, to support the so-called “drift-diffusion” model (DDM) of speeded choice. The DDM is used to account for the speed and accuracy of responses made to stimuli. It is directly derived from the physics that describes the partly random diffusion of small particles in a fluid (so-called “Brownian motion”; you might remember learning about this in high school science classes).

The DDM is based on relatively simple mathematics. The model equations track the position of a “particle” as a function of continuous time. The simplest case is to consider a particle which can move along a line, for example by moving up or down. The particle is influenced by two processes which affect the way it moves. First, it is continuously buffeted by random diffusion movements which move it upwards one moment and then downwards the next. As this process is random it does not move the particle in any particular direction: the average position of the particle just stays put, where it started. In addition, the particle has a characteristic rate of drift which is a steady movement, either up or down.

Psychologists have used the DDM model to study choice reaction times (RTs) by considering how long it might take the particle to collide with one of two barriers (or decision boundaries). The barriers are equidistant from the particle’s starting position: in our example one barrier is a fixed distance above the particle, the other the same distance below. Each barrier represents a decision point corresponding to one of the choices in a two-choice task. The “times to collision” of the particle, with these barriers, are taken (after suitable rescaling) to be the model of the RTs. More precisely, the collision times reflect the decision, or choice, part of these RTs. The time taken to encode stimuli and the time for executing the response after deciding which response to make are captured as a simple random variable and added to the choice component captured by the diffusion equations.
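This description translates almost directly into code. The following is a minimal sketch of a single continuous-time diffusion trial, simulated with small Euler timesteps; all parameter values (drift, noise, boundary separation, non-decision time) are illustrative assumptions rather than fitted values, and the non-decision time is simplified to a constant.

```python
import random

def ddm_trial(drift=0.2, noise=1.0, boundary=1.0, dt=0.001, non_decision=0.2):
    """Simulate one drift-diffusion trial.

    The 'particle' starts at 0, midway between the two decision
    boundaries at +boundary (choice A) and -boundary (choice B).
    On each small timestep it moves by drift*dt plus Gaussian
    diffusion noise scaled by sqrt(dt). Returns (choice, RT in
    seconds), where the RT adds a fixed non-decision time standing
    in for stimulus encoding and response execution.
    """
    x, t = 0.0, 0.0
    while abs(x) < boundary:
        x += drift * dt + noise * random.gauss(0.0, dt ** 0.5)
        t += dt
    choice = "A" if x >= boundary else "B"
    return choice, t + non_decision

random.seed(1)
trials = [ddm_trial() for _ in range(500)]
# With a positive drift, choice A is reached more often than B,
# but diffusion noise still produces errors and variable RTs.
accuracy = sum(choice == "A" for choice, _ in trials) / len(trials)
```

With these particular values the drift is weak relative to the noise, so accuracy sits modestly above chance; increasing the drift rate or widening the boundaries trades speed against accuracy, which is exactly the behaviour the DDM is used to capture.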

Drift-Diffusion and Reaction Time Studies

We will consider some basic questions about this popular and influential model: what features of choice RT data is it able to model, which model features (assumptions) are critical to the model’s ability to capture aspects of the data, and is the use of this particular physical model necessary, sensible and appropriate for capturing patterns in RT data?

It is easier to imagine this model with time divided into discrete timesteps, in which case it is strictly called a random walk model. Here I will describe a simple version in which the particle moves up or down the rungs of a ladder (see Figure 1). Imagine the ladder has 501 rungs and, on each trial of the psychological task being modelled, let us assume that the particle starts midway up the ladder at rung 251. Imagine also that each trial of the task being modelled involves seeing a picture of a face posing either a happy or fearful expression, and the participant has to identify the emotional expression on each face quickly and accurately using button-press responses. The model allows one to track the movement of the particle over timesteps on each trial. We can decide (arbitrarily) that a response of “happy” will be given whenever the particle reaches the top rung of the ladder, and “fearful” whenever it reaches the bottom. Note that the number of timesteps taken to reach one or other of these response thresholds will be used to capture the decision time on that particular trial.

The movement of the particle up and down the rungs is essentially a means for expressing the rate of information gathering in the direction of each response (upwards=happy; downwards=fearful). Imagine on each discrete step of time the particle moves either 10 rungs up the ladder or 10 rungs down, determined at random with a 50:50 chance of moving in either direction. Clearly, this just adds noise and variability to the timing of the movements, and thus to the response decisions. This random diffusion process will, on average, move the particle neither up nor down. Participants generally respond correctly and so the model needs a means for ensuring that, on happy face trials, the particle generally moves in an upward direction (and downward on fear face trials).  To do this, the model uses the drift rate. One can set the drift rate by saying, for example, that the particle will move 1 rung of the ladder on each timestep in a constant direction. A faster drift rate (in rungs moved per timestep) will give faster response times. The drift direction will be in the correct direction (on correct response trials) and in the wrong direction (on error trials).

In a task where a participant makes 95% correct responses, one can determine the drift rate, on a particular trial, by using a biased coin with a 95% chance of landing heads. On happy face trials, when the coin comes up heads, the particle moves one rung up the ladder on every timestep. On fear face trials, a heads outcome moves the particle one rung down the ladder on every timestep. As a result the particle will gradually, yet noisily, move in the correct direction on 95% of the trials and in the wrong direction on 5% of the trials. One can change the bias of the coin to capture the different levels of error made in different tasks and/or by different participants. Figure 1 shows an example of two trials with correct responses.


Figure 1: The random walk of the particle on a simulated happy face trial (blue circles) and a fear face trial (black crosses). Note the decision component of the RT is modelled by the number of timesteps to reach the top or bottom of the ladder (red lines). The trial decisions were completed in 111 (happy) and 145 (fear) timesteps respectively.
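The discrete ladder version is easy to simulate directly. Here is a minimal sketch of a single happy-face trial; the rung counts, step sizes and the 95% biased coin follow the description above, while everything else (the random seed, the number of simulated trials) is an arbitrary illustrative choice.

```python
import random

def ladder_trial(p_correct_drift=0.95, rungs=501, start=251,
                 diffusion_step=10, drift_step=1):
    """One happy-face trial of the ladder random walk.

    A biased coin fixes the drift direction for the whole trial:
    with probability 0.95 the drift is +1 rung per timestep (toward
    the correct 'happy' top boundary), otherwise -1. Each timestep
    also adds a random +/-10 rung diffusion step. Returns
    (correct, n_timesteps), where correct means the particle
    exited past the top rung."""
    drift = drift_step if random.random() < p_correct_drift else -drift_step
    pos, steps = start, 0
    while 1 <= pos <= rungs:
        pos += random.choice((-diffusion_step, diffusion_step)) + drift
        steps += 1
    return pos > rungs, steps

random.seed(0)
results = [ladder_trial() for _ in range(300)]
acc = sum(ok for ok, _ in results) / len(results)
mean_steps = sum(t for _, t in results) / len(results)
```

Running many such trials gives an accuracy close to the 95% set by the coin (the diffusion noise adds a few extra errors and rescues a few), and a positively skewed spread of decision timesteps of the kind shown in Figure 2.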


What can one simulate with this model? Let us start with the RT distribution for a single participant’s correct responses. The characteristic RT distribution for a single participant, on a wide range of 2-choice tasks, typically looks something like that shown in Figure 2A (i.e. the distribution is positively skewed). These real data were taken from a subject discriminating happy from fearful facial expressions (total trials=240). A simulation of 240 trials was run with a 5% error rate. For the correct responses (95% of trials), the simulated RT distribution for a single participant was as depicted in Figure 2B. We can see from the Figure that the model does a pretty good job of simulating the shape of the RT distribution for correct responses. Recall that we have to scale the decision timesteps into milliseconds and then add a time for the combined duration of stimulus encoding and response execution, in order to complete the modelling.





Figure 2. A: Real RT data (correct responses only) from a participant discriminating happy vs. fearful facial expressions. B: Simulated decision times using the ladder rung random walk version of the DDM, described above (drift rate simulated using a biased coin toss, with probability of heads=0.95).

However, the above model predicts the same RT distribution for correct and error responses. This is because the drift rate is the same (1 ladder rung per timestep) for both correct and incorrect responses; it is just the direction of the drift that changes. The typical picture in choice RT tasks is that error responses have a slower mean response than correct responses, so the assumptions of the above model must be changed in order to capture this observation. The usual way this has been done has been to allow the drift rate to vary from trial to trial. In the current ladder model this could be done by using a normal distribution to randomly determine the size of the drift on each trial. The mean of this distribution would be in the correct direction (on each timestep moving on average, say, 2 rungs up for happy face trials and 2 rungs down for fear face trials). However, there would be variability across trials (e.g., a standard deviation of 1.1 rungs across trials). The value for a particular trial would be rounded to the nearest whole number of rungs. Figure 3 shows what such a distribution of drift rates would look like.

To understand why this model feature gives slow errors one should note that the (blue) happy face trials with negative drift rates in Figure 3 will lead to errors. By contrast, the majority of the happy face trials have positive drift rates and will lead to correct responses. The mean drift rate on the correct trials (with positive drift rates) is 2.26 rungs per timestep for the blue distribution shown in Figure 3, and the mean of the negative drift rates (error trials) is -0.29. Thus, the correct trials drift faster than the error trials and so the time for the correct trials to reach the top rung of the ladder (the decision criterion) is quicker than that for the incorrect trials to reach the bottom rung of the ladder. The same arguments apply equally to the (red) fear face trials in Figure 3.



Figure 3. Variability in drift rate over trials. There are 200 happy face trials (blue) and 200 fear face trials (red). Positive drift rates indicate rungs moved up the ladder on each timestep, negative drift rates indicate rungs moved down.
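This slow-error mechanism can be checked by simulation. The sketch below draws the drift fresh on each happy-face trial from the distribution described above (Normal with mean 2 and SD 1.1 rungs per timestep, rounded to whole rungs); the trial count and seed are arbitrary illustrative choices.

```python
import random

def variable_drift_trial(rungs=501, start=251, diffusion_step=10,
                         drift_mean=2.0, drift_sd=1.1):
    """One happy-face trial with the drift rate drawn fresh each trial.

    Drift is Normal(2.0, 1.1) rungs per timestep toward the correct
    'happy' top boundary, rounded to a whole number of rungs, so a
    small fraction of trials drift the wrong way (or not at all) and
    end in errors. Returns (correct, n_timesteps)."""
    drift = round(random.gauss(drift_mean, drift_sd))
    pos, steps = start, 0
    while 1 <= pos <= rungs:
        pos += random.choice((-diffusion_step, diffusion_step)) + drift
        steps += 1
    return pos > rungs, steps

random.seed(2)
trials = [variable_drift_trial() for _ in range(1000)]
correct_times = [t for ok, t in trials if ok]
error_times = [t for ok, t in trials if not ok]
mean_correct = sum(correct_times) / len(correct_times)
mean_error = sum(error_times) / len(error_times)
# Error trials have small (negative or zero) drift rates, so they
# creep toward the wrong boundary and finish later than correct
# trials, which mostly drift quickly in the right direction.
```

The mean decision time on error trials comes out substantially longer than on correct trials, which is the slow-error pattern the drift-variability assumption was introduced to capture.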

However, on some tasks (such as the face emotion recognition task in my lab), the errors are consistently faster than the correct responses. This tends to happen when tasks are easy and participants are encouraged to respond rapidly. If one were to try to model this using the model we have developed so far, then one would need a bimodal distribution of drift rates for each type of stimulus. For happy face trials there would need to be a mode with positive drift rates and a mode with negative drift rates. The drift rate values in the positive mode, on average, would need to have smaller absolute values than the values in the negative mode. In this way the error trials would drift to their decision criterion (the bottom rung, for happy faces) faster than the correct trials drift towards the top rung of the ladder. This type of assumption in the model doesn’t seem very appealing to me, as it merely captures the effect without offering any explanation for what is different between a task with slow errors and the rarer task with fast errors. Why should the drift rate distributions differ?

DDM modellers capture fast errors by changing another modelling assumption. Note that so far, the ladder model has the particle always starting midway between the top and bottom rungs. In the full DDM the start position is also allowed to vary between trials, and introducing this feature into the model captures fast errors. If the start point is nearer the correct decision boundary (e.g., nearer the top rung for a happy face trial) then there won’t be many errors on these trials and they will be slow as there is a long way to drift. However, if the start point is nearer to the wrong decision boundary then there will be relatively more errors arising on these trials and they will be fast as the (wrong) boundary is relatively close. With drift rates and start points varying between trials, modellers can get the model to switch between fast and slow errors simply by closing (or widening) the gap between the decision boundaries (varying the number of rungs on our ladder).
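The start-point mechanism can also be checked with a small simulation. In this sketch the drift is fixed (+1 rung per timestep toward the correct boundary) and only the starting rung varies across trials; the Normal(midpoint, 80 rungs) start distribution, the clipping to the ladder, and the trial count are all illustrative assumptions.

```python
import random

def variable_start_trial(rungs=501, diffusion_step=10, drift=1, start_sd=80):
    """One happy-face trial with the starting rung varying across trials.

    The start point is Normal(midpoint, 80 rungs), clipped to stay on
    the ladder; drift is fixed at +1 rung per timestep toward the
    correct (top) boundary. Trials starting near the bottom produce
    most of the errors, and those errors finish quickly because the
    wrong boundary is close. Returns (correct, n_timesteps)."""
    start = round(random.gauss((rungs + 1) / 2, start_sd))
    start = max(2, min(rungs - 1, start))
    pos, steps = start, 0
    while 1 <= pos <= rungs:
        pos += random.choice((-diffusion_step, diffusion_step)) + drift
        steps += 1
    return pos > rungs, steps

random.seed(4)
trials = [variable_start_trial() for _ in range(1500)]
correct_times = [t for ok, t in trials if ok]
error_times = [t for ok, t in trials if not ok]
mean_correct = sum(correct_times) / len(correct_times)
mean_error = sum(error_times) / len(error_times)
# Errors now cluster on trials that began close to the wrong
# boundary, so their mean decision time is shorter than that of
# correct trials - the fast-error pattern.
```

Here the mean decision time on error trials comes out shorter than on correct trials, the opposite of the drift-variability case, which is why combining both sources of variability lets the full DDM produce either pattern.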

Recently, it has been argued that these across trial variability assumptions (in drift rate, and in particle starting position) used by the DDM, make it so flexible that it can accommodate almost any result, rendering it unfalsifiable (Jones et al., 2014). In the same issue of Psychological Review, you can read a defence of the falsifiability of the DDM and related models (Smith et al., 2014 and Heathcote et al., 2014), and the response to this defence. It’s an entertaining piece of intellectual ping pong!

A final observation came from a mathematician I know who studies the maths of branching processes like those underlying the DDM. She said:

“the underlying physics is completely irrelevant to the model; you just need to know the distributions of the time-to-collision with the barriers, these distributions need have nothing to do with diffusion and drift”.

We have seen above that these distributions (of drift rate, which determine the times to collision) are key to how the model captures the data. She further argued that to be a “real model” there would need to be something in the physics of the brain mechanisms underlying the choices made which was close to real physical diffusion processes. Interestingly, recent work has tried to fill that gap by showing that neural firing rates behave in similar ways to the physics of diffusion as captured by the DDM (see Ratcliff et al., 2016, Box 3).

The (Failed) Physics of Positivity

I think the above shows that the importance of the underlying physical model for the predictions based on the DDM is (at best) arguable. From time to time, however, psychologists have clearly overstepped the mark in using physics to develop their mathematical models of behaviour. In a series of papers, culminating in a paper in the prestigious journal, American Psychologist, Fredrickson and Losada (2005) claimed to have used non-linear dynamical equations describing the physics of chaos to uncover an emotional “positivity ratio”, which they claimed had important properties for people’s wellbeing. Their positivity ratio is the number of positive emotions exhibited by a person (or couple, or group of people) divided by the number of negative emotions they exhibited. Fredrickson and Losada claimed that the mathematical model predicted that individuals “flourish” when this ratio exceeds 2.9 (but is below 11.6). The concept represented by the non-linear equations was that when the positivity ratio was below 2.9 the individual is in one state (non-flourishing; imagine this being akin to ice) and that as the positivity ratio reaches the “tipping point value” of 2.9, their full potential is suddenly released (akin to them transitioning suddenly to another state, like liquid water).

The claims in the 2005 paper, at surface value, sound important and the underlying mathematical model and equations must have appeared impressive. These factors seem likely to have been part of the reason why this paper had received 322 citations between its publication and April 2013. However, the validity of the use of this mathematical model severely troubled a part-time master’s student from the University of East London, Nick Brown, who was studying the 2005 paper as part of a module on “positive psychology”. He contacted an academic called Alan Sokal who he felt might be able to help him write a paper criticising the earlier work on the positivity ratio. They subsequently published their critique in the same journal (Brown et al., 2013). The abstract of their critique perfectly summarises both the issues with this particular piece of pseudo-science and some general guidance on the use of mathematical modelling tools in psychology:

“We examine critically the claims made by Fredrickson and Losada (2005) concerning the construct known as the “positivity ratio.” We find no theoretical or empirical justification for the use of differential equations drawn from fluid dynamics, a subfield of physics, to describe changes in human emotions over time; furthermore, we demonstrate that the purported application of these equations contains numerous fundamental conceptual and mathematical errors. The lack of relevance of these equations and their incorrect application lead us to conclude that Fredrickson and Losada’s claim to have demonstrated the existence of a critical minimum positivity ratio of 2.9013 is entirely unfounded. More generally, we urge future researchers to exercise caution in the use of advanced mathematical tools, such as nonlinear dynamics, and in particular to verify that the elementary conditions for their valid application have been met.” (Brown et al., 2013, p. 801)

A full treatment of this bizarre case is given in a 2015 lecture by Alan Sokal. His devastating critique of the published “positivity ratio” papers is an entertaining read, by turns jaw-dropping and hilarious, despite being a sad indictment of the academic peer reviewing process that allowed the 2005 paper to be published in the first place. This point was put rather punchily in slide 228 of his lecture: “How could such a loony paper have passed muster with reviewers at the most prestigious journal American Psychologist?” How indeed!

Prof. Alan Pickering reacts to tweets according to an as yet undetermined mathematical model @ad_pickering.

Time in Mind: How your brain tells the time


Dr. Caspar Addyman is a Lecturer in the Department. He is a developmental psychologist interested in learning, laughter and behaviour change. The majority of his research is with babies. He has investigated how we acquire our first concepts, the statistical processes that help us get started with learning language, and where our sense of time comes from.

Here, he looks at the last of these: how our brain tells the time.

It is a defining feature of the modern world that we all seem to be short of time, all the time. We are poor at managing our time. We are poor at estimating time. And until recently psychologists have been poor at explaining why. I believe my research on how the brain represents short intervals might give us a few clues. The short answer is that we’re bad because our brains don’t contain clocks. Instead we must guesstimate the passage of time based on how our memories fade.

This might not sound too radical to you but it goes against received wisdom. For 50 years the main explanation of how you judge intervals has been based on a little stopwatch in your head. This is known as the pacemaker-accumulator model, as it involves something that ticks (the pacemaker) and something that counts the ticks (the accumulator). This is a fairly intuitive idea but I would argue it is completely wrong.

It’s wrong for three main reasons. The biggest problem is that if we had an internal clock we would be a lot better at judging time than we are. Secondly, the clock model can’t explain how we judge the time in retrospect. Thirdly, it can’t easily explain why time flies when you’re having fun.

My memory-based model of interval timing solves all these problems. Developed with colleagues at Birkbeck and Burgundy, and called the Gaussian Activation Model of Interval Timing (GAMIT; French, Addyman, Mareschal & Thomas, 2014), it is much simpler than the name suggests. The key idea is that you estimate time passing by how your memories fade. The more time has passed, the fuzzier they are. The more things that happen to you, the faster they will fade and the faster you will feel time is passing.
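The core intuition can be sketched in a few lines of Python. This is not the actual GAMIT model (which uses Gaussian activation traces in a connectionist network); it is a minimal illustration, with an assumed exponential decay rate and an assumed read-out noise level, of how elapsed time could be read off a fading memory trace:

```python
import math
import random

DECAY = 0.1      # assumed decay rate of a memory trace (per second)
NOISE_SD = 0.02  # assumed read-out noise on trace strength

def trace_strength(elapsed):
    """Strength of a memory trace `elapsed` seconds after the event."""
    return math.exp(-DECAY * elapsed)

def estimate_elapsed(strength):
    """Invert the decay curve to guess how long ago the event happened,
    given a noisy reading of the trace."""
    noisy = max(1e-6, strength + random.gauss(0, NOISE_SD))
    return -math.log(noisy) / DECAY

random.seed(1)
for true_t in (5, 10, 20):
    guesses = [estimate_elapsed(trace_strength(true_t)) for _ in range(2000)]
    mean = sum(guesses) / len(guesses)
    print(f"event {true_t:>2}s ago -> mean estimate of {mean:.1f}s")
```

Because the decay curve flattens with time, the same read-out noise produces larger timing errors for events further in the past: the memory is fuzzier, so the estimate is fuzzier too.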


You have no clock

Humans and rats are terrible at telling the time. Get us to press a lever when 10 seconds have passed and we will generally do so somewhere between 8 and 12 seconds. We are correct on average, but with quite a bit of variation, and humans are no better than rats. If the interval is increased to 20 seconds our estimates are just as bad, spread out between 16 and 24 seconds. If the interval is twice as long, you (and your pet rat) are twice as bad.


This is where the real problem lies. If you had some sort of clock in your head then its random unreliability would average out the longer you ran it. Your errors should be proportionally smaller on longer intervals. Pacemaker-accumulator models normally get round this by saying that as numbers get larger, counting gets harder. This is a kludge. In our model the errors are impossible to get around: as memories fade, uncertainty increases, and bad estimates are as good as it gets.
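A quick simulation makes the point. Below is an illustrative pacemaker-accumulator, sketched under the standard assumption of a noisy (Poisson) pacemaker, with a tick rate chosen arbitrarily. Its relative error shrinks as the interval grows, which is exactly what real timing data do not show:

```python
import random
import statistics

RATE = 10.0  # assumed pacemaker rate (ticks per second)

def pacemaker_estimate(interval):
    """Count noisy ticks over the interval, then convert the count back to seconds."""
    t, ticks = 0.0, 0
    while True:
        t += random.expovariate(RATE)  # exponential gaps -> Poisson tick counts
        if t > interval:
            break
        ticks += 1
    return ticks / RATE

random.seed(0)
for interval in (10.0, 20.0):
    estimates = [pacemaker_estimate(interval) for _ in range(2000)]
    cv = statistics.pstdev(estimates) / statistics.mean(estimates)
    print(f"{interval:.0f}s interval: relative error (CV) = {cv:.3f}")
```

Doubling the interval roughly divides this clock's relative error by the square root of two, whereas in people and rats the relative error stays constant. That mismatch is the scalar-timing problem the kludges are meant to patch.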


Timing, all the time.

Mental clocks have a second problem: what to time? It seems we can judge the time of any event we remember or can locate in the sequence of our experiences. How long ago did you start reading this article, or this paragraph? When did the waiter leave our table? When did that blue square appear on the screen in this experiment I am taking part in? If mental timing is done with a clock then either you need to start a separate timer for every single event, or a master clock must label everything. The former would be highly wasteful, while the latter would be complex and couldn't easily account for the errors we mentioned above.

With a memory model all this comes for free. When we access our memory of a past event, there is more uncertainty the longer ago it was. The instant the waiter leaves, the memory of that event is clear; with each passing moment the details become less clear. What was he wearing? Was I leaning forward or backward? What did you just say? Our brains are used to dealing with uncertainty; in our view, timing is just another example.


Time flies when you are having fun

Imagine you are about to give a five minute interview on live television. You are off camera waiting your turn and there is nothing for you to do but focus on the passing time. Five minutes feels like forever. Then it is your turn and suddenly everything is happening at once. You get to the end in no time and are surprised it is over so soon. But then, looking back on it, the pattern is reversed. You remember little about the waiting, but the interview is full of events. If you didn't know otherwise you would swear the interview was longer than the wait.

The difference between these so-called prospective and retrospective timing judgements has been confirmed in a large meta-analysis of 117 studies (Block, Hancock & Zakay, 2010). It is so striking that most researchers say it means there must be two independent timing systems. This seemed ridiculous to us, and our model was largely developed to unify these two effects.

In our view, judging the passing of time is a combination of two things: how much is happening and how much attention we are paying to the passage of time. Our model quantifies how these two factors interact to create distortions of time. The more that happens, the faster your memories fade, making recent events feel further in the past. Yet the more that is happening, the less attention you can give to the passing of time, and it feels like things are happening faster.
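As a toy illustration (the functional forms and constants here are my own assumptions, not GAMIT's actual equations), the two factors pull prospective and retrospective judgements in opposite directions:

```python
def prospective_duration(actual_secs, event_rate, k=0.5):
    """Judged 'as it happens': more events steal attention from time,
    so a busy interval feels shorter (assumed functional form)."""
    attention_to_time = 1.0 / (1.0 + k * event_rate)
    return actual_secs * attention_to_time

def retrospective_duration(actual_secs, event_rate, k=0.1):
    """Judged afterwards: more events mean faster-fading memories,
    which read as more elapsed time (assumed functional form)."""
    return actual_secs * (1.0 + k * event_rate)

# The TV-interview example: a dull 5-minute wait vs a busy 5-minute interview.
wait_rate, interview_rate = 0.5, 8.0  # assumed events per minute
print(prospective_duration(5, wait_rate), prospective_duration(5, interview_rate))
print(retrospective_duration(5, wait_rate), retrospective_duration(5, interview_rate))
```

While waiting, the empty interval drags (4.0 vs 1.0 "felt" minutes in this toy parameterisation); looking back, the busy interview seems the longer of the two (9.0 vs 5.25), reproducing the reversal described above.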


Time will tell

We are still developing our model and it's not the only game in town. In a recent review of the field we found over 20 different approaches to how we judge short intervals (Addyman, French & Thomas, 2016). And then there are whole other classes of models that look at very fast timing, under a second, or at timing on the order of days. In the realm of our daily experience, from seconds to minutes, we believe that our model is the most elegant and intuitive. But the real test is how well it fits the data. And that's what we are working on now.

Dr. Caspar Addyman  tweets in real-time at @brainstraining




One rather nice thing about this research project was that I experienced a genuine “Eureka moment”, a flash of insight where I unexpectedly solved a big problem.

The phrase derives from the story of the ancient Greek scientist Archimedes leaping from his bath shouting "Heureka!" ("I have it!") when he realised how to measure the volume of an irregularly shaped body by immersing it in water. No-one knows if that story is true. But it has become a common stereotype of the scientific process that it proceeds through giant leaps of insight or discovery. Mostly, it doesn't. Science, like life, is mostly hard work with the occasional bit of good luck. Ninety-nine percent perspiration and one percent inspiration, as Thomas Edison was fond of saying.

My lucky moment came one morning on the Victoria line somewhat north of Pimlico. For several months I had been struggling with how to build the effect of attention into our original computer model of fading memory. I had no clear ideas, but was supposed to have a meeting that lunchtime explaining my progress. Sleepy and forlorn, I just stared out of the window. Whereupon the answer just popped into my head: adding a loop to our network would let it look at its own previous estimates. When more events were competing for attention, those loops would happen less frequently.

I knew straight away it would work and went happily to my meeting. By the end of the day I had a computer model that did as I expected, and within a week the paper was written (Addyman & Mareschal, 2014). Needless to say, in a few hundred tube journeys since then, it hasn't happened again. Maybe Edison was exaggerating.


This article is published with a Creative Commons Attribution NoDerivatives licence [http://creativecommons.org/licenses/by-nd/4.0/], so you can republish  it for free providing you link to this original copy.

Paper Review: Rewards can make time last longer


Dr. Devin B. Terhune is a Lecturer in the Department of Psychology at Goldsmiths, University of London. He applies a range of methods to different facets of consciousness, with a focus on time perception and hypnosis. Here he tells us how it could be that rewards can make time seem to last longer in a review of a paper by Failing and Theeuwes (2016). 

Failing, M., & Theeuwes, J. (2016). Reward alters the perception of time. Cognition, 148, 19-26.

Everyone enjoys a reward, from a kind remark made in passing to a formal award. In turn, rewards significantly affect behaviour by providing a source of motivation. There is consistent evidence that stimuli associated with reward trigger a transient release of the neurochemical dopamine, which influences the salience of a stimulus, drawing attention to it. How this attentional bias toward stimuli associated with reward impacts lower level perception, however, is less clear.


In their newly published study, Failing and Theeuwes, researchers at Vrije Universiteit Amsterdam, investigated how reward influences time perception. Our perception of time is a fundamental feature of conscious experience that helps to shape our identity, and it is necessary for a diverse array of motor and psychological functions from decision making to playing a musical instrument. It fluctuates from moment to moment, is influenced by a range of environmental factors, and is altered in a number of clinical conditions including Parkinson's disease and schizophrenia.

There is good reason to believe that reward and timing are linked. For instance, both reward processing and timing depend on dopaminergic pathways and frontal-striatal circuitry. Previous studies have shown links between reward and timing but nearly all of these have been in non-human animals and in intervals over a few seconds. Can reward impact our perception of stimuli that only last a few hundred milliseconds?

In this study the researchers relied on previous studies linking attention and time perception. It is well established that the more we attend to a stimulus, the longer we perceive it to last. This is why, for instance, time seems to last longer when we are afraid. To test the hypothesis that stimuli associated with reward would be perceived as lasting longer because of attentional bias, the researchers used an interesting temporal illusion known as the oddball effect. This is when time seems to last longer when we’re in the company of strange people. Actually, it’s when an unexpected or deviant stimulus is perceived as lasting longer. For example, if I show you a sequence of black circles with a single red circle embedded randomly in the middle, you’ll tend to overestimate the duration of the red (oddball) circle. Not everyone exhibits the oddball effect, but it is quite robust.

First Experiment

In the first experiment, the researchers varied the colour (red or blue) of the oddball from one trial to the next. One colour indicated a reward trial whereas the other indicated a control (no reward) trial.

A train of seven standard (black) stimuli was presented with the oddball always randomly embedded at the 5th, 6th, or 7th position. Standards were presented for 500ms whereas oddballs varied from 350 to 650ms. Participants were given feedback after each trial and rewarded with points for correct responses and penalized for incorrect responses. These points influenced how much they were compensated at the end of the experiment. The researchers observed that oddballs were perceived as lasting longer on reward trials than control trials, supporting their central prediction.

Second Experiment

A question arises, though: is the effect only present when the reward is directly tied to the oddball stimulus? For instance, will it still be present if the reward is associated with the broader sequence of stimuli (i.e., the standards)? The authors investigated this question in a second experiment by changing the colour of the standards between reward and control trials (the oddball colour always remained black). Interestingly, they failed to replicate their finding of temporal dilation on reward trials: reward and control trials did not differ in perceived duration of the oddball. However, temporal precision increased on reward trials. This makes sense. From the beginning of the trial participants knew that it was a reward trial and so they most likely attended more closely, resulting in superior discrimination. Interestingly, in both experiments, the perceived duration of oddballs in all conditions did not differ from the duration of the standards. In other words, the researchers did not replicate the classic oddball effect.

Third Experiment

A further issue is whether the observed temporal dilation in the first experiment can be attributed to reward for correct responses or the threat of penalty for incorrect responses. In addition, how might trial-by-trial feedback affect performance? It is possible, for instance, that trial-by-trial performance monitoring influenced task performance resulting in a reduction or cancellation of the classic oddball effect. To address these questions, in a final experiment, the researchers removed the penalty for errors, gave summary feedback after each block, rather than each trial, and included both low and high reward levels. Importantly, they replicated the original result with participants overestimating the duration of oddballs on high reward trials relative to low reward trials and again found no differences in temporal precision across trials. In addition, the perceived duration of both oddballs was longer than the standards, thereby replicating the classic oddball effect.

Thus, the researchers were able to reliably show that a stimulus associated with reward is perceived as lasting longer than one that is associated with either no reward or a lower reward. This effect was not present when reward is signaled by the broader trial sequence – only the oddball itself. These results are consistent with the hypothesis that a reward-signaling stimulus is highly salient and draws greater attention, resulting in temporal dilation. Moreover, this dilation effect is broadly consistent with a number of models of timing, such as the striatal-beat-frequency model. According to the latter, dopamine release associated with reward may jump-start timing in striatum, resulting in dilation.

Where next?

This finding has numerous implications for a range of psychological phenomena from substance dependence to gambling. It further raises questions about what impact temporal dilation of reward-associated stimuli might have. However, the magnitude of temporal dilation was only 18ms in Experiment 1 and 6ms in Experiment 3. Accordingly, it remains to be seen how much of an impact such effects will have on behaviour. Nevertheless, this study provides a compelling demonstration of how reward can alter a fundamental feature of conscious awareness.

For those who want to know more about this area, there is an upcoming special issue of the journal Current Opinion in Behavioral Sciences devoted to time perception.