Nina Flad

Alumni of the Department Human Perception, Cognition and Action
Alumni of the Group Cognition and Control in Human-Machine Systems

Main Focus

I am interested in how humans seek out and process visual information from different sources during manual steering. For example, controlling a car's heading and speed requires us to constantly scan our environment by moving our eyes; we have to monitor our distance to the car in front of us as well as the road curvature and peripheral road signs. I am further interested in understanding how the characteristics of these sources, such as their updating frequency and spatial content, influence visual scanning behavior.
For this purpose, I record EEG and eye-movement behavior during visual scanning tasks. Eye-movements can inform us about the user's gaze behavior, that is, what the user is looking at and, thus, what information they try to obtain from their surroundings. EEG can give insights into cognitive “properties” such as attention and workload, namely the extent to which the information that is fixated on is actually perceived and processed.

Mobile eye-trackers offer the possibility to study eye-movements in real-world environments. At the same time, mobile EEG systems promise to record cortical signals robustly in such scenarios. However, combining these two methods is not straightforward, since not much is known about perception in real-world environments. Established approaches that work under laboratory conditions do not necessarily also work in more complex situations. Currently, there are two established approaches for studying perceptual processes.
First, in traditional ERP (event-related potential) research on visual perception, participants have to avoid moving their eyes to prevent ocular artifacts. The stimulus can be presented either at fixation or in the visual periphery. In such studies, perceptual processing is assumed to start at stimulus onset. However, this paradigm is a poor approximation of natural gaze behavior, which is characterized by eye-movements.
Second, modern processing techniques have made unconstrained eye-movements during EEG (electroencephalography) recordings possible. This gave rise to FRP (fixation-related potential) research, which is based on the assumption that perceptual processing starts with fixation onset on a stimulus. While this assumption might hold in well-controlled laboratory studies, it does not hold for real-life situations, where peripheral vision is available and leads to (partial) processing of stimuli that are not fixated.
To summarize, neither ERPs nor FRPs are fully suited to study perception in situations where the onset of perceptual processing is unknown. This means that studying perceptual processes under natural conditions remains a challenge.
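The difference between the two paradigms comes down to which event the continuous EEG is time-locked to before averaging. A minimal sketch with simulated data (the sampling rate, event times, and the 250 ms stimulus-to-fixation interval are illustrative assumptions, not values from my studies):

```python
import numpy as np

def epoch(signal, event_samples, pre, post):
    """Cut fixed-length windows from a continuous signal around event samples."""
    return np.stack([signal[s - pre:s + post] for s in event_samples])

# Simulated single-channel recording at 500 Hz (values are illustrative)
fs = 500
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 20 * fs)

stimulus_onsets = np.array([2, 6, 10, 14]) * fs     # ERP: time-lock to stimulus onset
fixation_onsets = stimulus_onsets + int(0.25 * fs)  # FRP: time-lock to fixation onset

pre, post = int(0.2 * fs), int(0.8 * fs)            # 200 ms baseline, 800 ms post-event
erp = epoch(signal, stimulus_onsets, pre, post).mean(axis=0)
frp = epoch(signal, fixation_onsets, pre, post).mean(axis=0)
```

When the true onset of perceptual processing is unknown, neither choice of time-locking event is guaranteed to align the averaged epochs with the underlying cortical response.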

What is the time course of perceptual processing in a real-world scenario that includes peripheral information on stimuli that are yet to be fixated? If neither ERPs nor FRPs on their own should be applied in such a situation, can these methods be combined to gain insight into visual processing?

In general, I use EEG in combination with eye-tracking and EOG (electrooculography) [1,2] to determine cortical activities prior to and during stimulus fixation. ICA (independent component analysis) is used for artifact correction.
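The ICA step can be illustrated on toy data. The sketch below uses scikit-learn's FastICA on simulated channel mixtures as a stand-in for an EEG-specific pipeline; the correlation step imitates identifying the ocular component via a simultaneously recorded EOG trace. All signals and mixing weights are made up for illustration:

```python
import numpy as np
from sklearn.decomposition import FastICA

# Simulate two sources: a cortical signal and large, slow blink artifacts
rng = np.random.default_rng(42)
n = 2000
neural = np.sin(np.linspace(0, 60, n)) + 0.1 * rng.normal(size=n)
blinks = np.convolve((rng.random(n) < 0.005).astype(float),
                     np.hanning(50), mode="same") * 20

# Each "electrode" records a different linear mixture of the two sources
mixing = np.array([[1.0, 0.8],
                   [0.6, 1.2],
                   [1.1, 0.3]])
recorded = mixing @ np.stack([neural, blinks])   # shape: (3 channels, n samples)

# Unmix with ICA, identify the ocular component, zero it, and back-project
ica = FastICA(n_components=2, random_state=0)
sources = ica.fit_transform(recorded.T)          # shape: (n, 2)
corrs = [abs(np.corrcoef(sources[:, k], blinks)[0, 1]) for k in range(2)]
sources[:, int(np.argmax(corrs))] = 0.0          # drop the blink component
cleaned = ica.inverse_transform(sources).T       # artifact-corrected channels
```

Because ICA only assumes statistically independent sources mixing linearly at the sensors, the same back-projection idea carries over to multi-channel EEG, where blink and saccade components are typically much larger than the cortical signals of interest.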
In my current study, I am using stimuli that lead to different levels of performance in a peripheral identification task (low = 60%, medium = 75%, or high = 90% correct, with 50% being chance level). This way, I have stimuli with a known or unknown onset of perceptual processing, and I can compare the EEG waveforms between the conditions.

  • High performance indicates processing onset at stimulus onset, because in this case a saccade to the stimulus is not necessary for stimulus identification. These stimuli evoke an ERP [3].
  • Low performance indicates processing onset at fixation onset, because these stimuli require a fixation for reliable identification. Such stimuli evoke an FRP.
  • The perceptual onset for stimuli with medium performance is unknown. Some peripheral processing occurs before the saccade, but it is not sufficient to identify the stimulus. This is the interesting case, because it is the most likely to occur in a natural environment. At the same time, the resulting waveform in the brain is unknown.

Initial results
It seems that a stimulus in peripheral view, which does not require an eye-movement for identification, evokes a cortical response at stimulus onset and not at stimulus fixation, even though the participants move their eyes to fixate it [3].

Initial conclusion
I currently assume that a stimulus evokes a brain response both at its onset (ERP) and at its fixation (FRP). Depending on how easy the stimulus is to identify prior to a saccade, these two responses combine in different proportions.
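This working hypothesis can be expressed as a simple superposition model. The sketch below uses Gaussian bumps as stand-ins for the evoked components; all latencies, widths, and mixing weights are illustrative assumptions, not fitted values:

```python
import numpy as np

fs = 500
t = np.arange(-0.2, 0.8, 1 / fs)                 # epoch time axis in seconds

def component(latency, width=0.03, amplitude=1.0):
    """Gaussian bump as a stand-in for an evoked component."""
    return amplitude * np.exp(-((t - latency) ** 2) / (2 * width ** 2))

saccade_delay = 0.25                             # assumed stimulus-to-fixation interval
erp_template = component(0.10)                   # response locked to stimulus onset
frp_template = component(0.10 + saccade_delay)   # same response, locked to fixation onset

def observed(w_erp, w_frp):
    """Working hypothesis: measured waveform = weighted sum of both responses."""
    return w_erp * erp_template + w_frp * frp_template

high = observed(0.9, 0.1)     # easy peripheral stimulus: mostly onset-locked (ERP)
low = observed(0.1, 0.9)      # hard stimulus: mostly fixation-locked (FRP)
medium = observed(0.5, 0.5)   # intermediate case: both responses contribute
```

Under this model, the medium-performance condition should show a waveform intermediate between the ERP- and FRP-dominated extremes, which is a prediction the condition comparison in my study can test.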


[1] Flad N, Bülthoff HH and Chuang LL (August-2015) Combined use of eye-tracking and EEG to understand visual information processing, International Summer School on Visual Computing (VCSS 2015), Fraunhofer Verlag, Stuttgart, Germany, 115-124.

[2] Flad N, Fomina T, Bülthoff HH and Chuang LL (2017) Unsupervised clustering of EOG as a viable substitute for optical eye-tracking. In: Eye Tracking and Visualization: Foundations, Techniques, and Applications, First Workshop on Eye Tracking and Visualization (ETVIS 2015), Springer, Cham, Switzerland, 151-167, Series: Mathematics and Visualization.

[3] Flad N (October-7-2016): When does the Brain Respond to Information during Visual Scanning?, 1st Neuroergonomics Conference: The Brain at Work and in Everyday Life, Paris, France.

Curriculum Vitae

Nina is a PhD student at the Max Planck Institute for Biological Cybernetics. She has a Master’s degree in Neural Information Processing (Graduate Training Centre of Neuroscience, Tübingen) and a Bachelor's degree in Bioinformatics.
