A brief history of eye movements, or why NLP sucks
Psychology's having a bit of a hard time of it at the minute. Social Psychology is reeling from high-profile resignations and paper retractions by Stapel, Smeesters and Sanna (we'll be moving onto the 'T's soon). There are worries about how you pick the right sort of analysis in neuroimaging (and what 'right sort of analysis' even means). Jonah Lehrer's been caught being a bit too creative with some of the quotes in his recent book 'Imagine'. Claims that eye movement patterns can tell you whether someone is lying received the smackdown from Richard Wiseman and colleagues. It's all a bit doom and gloom, really.
It's that last one, about eye movements, that I'm finding particularly irritating at the moment. Don't get me wrong, I'm not a fan or proponent of neurolinguistic programming - in fact, much the opposite. What really annoys me is that they've taken something genuinely cool and sullied it with guff. You don't need to make crappy claims about lie detection to make eye movement research seem awesome. You just need to look at its history.
The modern story of eye movement research* begins in the 18th century with William Porterfield (although work can be traced as far back as the 11th century, to Optics by the scientist and polymath Ibn al-Haytham). Porterfield's essays from this time described eye movements in relation to reading, but drew on his conscious experience of actually feeling the eye move in order to make assertions about them. More systematic studies were later carried out by William Wells in the late 1700s and early 1800s. By some accounts, Wells was fairly dubious about Porterfield's subjective descriptions, and instead experimented with spinning people around and seeing what happened to their eyes when they stopped. Possibly after getting inspiration from Alton Towers - some of the rides there look pretty old. Possibly. Joking aside though, Wells' technique was pretty ingenious. Wade et al. (2003) provide some good details on it, but in a nutshell, it involved staring at a candle flame to generate an afterimage of the flame on the retina (at the back of the eye). This gave him a stable point of reference for the eye. After spinning around until he was dizzy, Wells would stop and stare at a large piece of paper with a mark on it. What he saw was a cycle of movements in which the flame afterimage drifted slowly away from the mark, then suddenly snapped back to it - a movement we now call 'optokinetic nystagmus', which you can induce in yourself by trying to watch something out of the window of a fast-moving train. Or by making yourself dizzy, whichever you prefer.
Cut to 1891, and scientists up the ante by starting to develop recording techniques that involve direct attachments to the eye. Like Wells, Ewald Hering** used afterimage techniques in his work - but with the novel addition of attaching rubber tubes to the eyelids so that he could hear what the ocular muscles were doing. Others, like Louis-Emile Javal (who is credited with coining the term 'saccade' to describe the frequent, jerky eye movements that we make) and August Ahrens, experimented with attaching things directly to the eyes themselves. Ahrens developed an ivory cup with a bristle attached to it, which was placed directly on the cornea. Saccades were then registered by the movement of the bristle against a revolving smoked drum. Suddenly the rotating drum seems a little bit more attractive.
This all seems archaic, but modern eye movement recording techniques find their basis in these sorts of developments. The scleral search coil technique, developed by David Robinson in 1963, heralded the advent of faster, more accurate, higher resolution, and frankly more terrifying eye movement measurements (just do a Google image search for 'scleral search coil technique' and you'll see what I mean). Here, a flexible contact lens containing wire coils is attached to the eye. The participant sits inside a set of larger coils which generate alternating magnetic fields, inducing a voltage in the lens coils. This effectively turns the lens coils into directional antennae, with the magnitude of the signals indicating the orientation of the eye.
At around the same time, the Russian vision scientist Alfred Yarbus was experimenting with both full-fitting contact lenses and lenses stabilised on the eye using a suction cup, each with mirrors attached to them. In one of the classic studies in vision, Yarbus got participants to look at a painting, Repin's 'An Unexpected Visitor' (above), after being given different sorts of instructions (for example, 'give the ages of the people' or 'estimate how long the visitor had been away from the family'). By reproducing the sequence of eye movements that his participants made under each of these conditions, Yarbus was able to show that high-level contextual factors could override lower-level attention guided by simple stimulus features, and that faces are really important (check out Wade, 2010, and Tatler et al., 2010, for more info and a diagram of the suction cups).
Thankfully, modern eye movement research isn't so painful. Video-based eye trackers are now in widespread use, and build upon early work by people like Jan Evangelista Purkinje - infra-red light sources are directed at the eyes, and corneal reflections in relation to the centre of the pupil let you know where someone's eye is pointing.
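The pupil/corneal-reflection idea is simple enough to sketch in a few lines of code. The names and calibration numbers below are purely illustrative assumptions (no real tracker works exactly like this): the key point is that the infra-red glint on the cornea and the pupil centre shift together when the head moves, so their *difference* vector mostly reflects eye rotation, and a simple calibrated mapping turns that vector into a point on the screen. Real trackers fit higher-order polynomials; a straight line per axis keeps the idea visible.

```python
# Minimal sketch of pupil-minus-corneal-reflection gaze estimation.
# All names and calibration data are hypothetical, for illustration only.

def pupil_cr_vector(pupil, glint):
    """Difference between pupil centre and corneal glint, in image pixels.
    Head movement shifts both features together, so this difference is a
    (roughly) head-invariant measure of eye rotation."""
    return (pupil[0] - glint[0], pupil[1] - glint[1])

def fit_axis(vs, ss):
    """Least-squares line s = a*v + b mapping one vector axis to one screen axis."""
    n = len(vs)
    mv, ms = sum(vs) / n, sum(ss) / n
    a = sum((v - mv) * (s - ms) for v, s in zip(vs, ss)) / sum((v - mv) ** 2 for v in vs)
    return a, ms - a * mv

def calibrate(vectors, targets):
    """Fit x and y maps from fixations on known on-screen calibration targets."""
    ax, bx = fit_axis([v[0] for v in vectors], [t[0] for t in targets])
    ay, by = fit_axis([v[1] for v in vectors], [t[1] for t in targets])
    return lambda v: (ax * v[0] + bx, ay * v[1] + by)

# Hypothetical calibration: the viewer fixates the four screen corners.
vectors = [(-10, -8), (10, -8), (-10, 8), (10, 8)]      # pupil-glint vectors (px)
targets = [(0, 0), (1920, 0), (0, 1080), (1920, 1080)]  # screen positions (px)
gaze = calibrate(vectors, targets)

# A centred pupil-glint vector maps to the centre of the screen.
print(gaze(pupil_cr_vector((320, 240), (320, 240))))
```

With the toy four-corner calibration above, a zero difference vector lands at the middle of the 1920x1080 screen, which is the sanity check you'd expect from a symmetric calibration.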