TY - JOUR
T1 - Modelling Task-Dependent Eye Guidance to Objects in Pictures
AU - Clavelli, Antonio
AU - Karatzas, Dimosthenis
AU - Lladós, Josep
AU - Ferraro, Mario
AU - Boccignone, Giuseppe
PY - 2014/1/1
Y1 - 2014/1/1
N2  - We introduce a model of attentional eye guidance based on the rationale that the deployment of gaze is to be considered in the context of a general action-perception loop relying on two strictly intertwined processes: sensory processing, depending on current gaze position, identifies sources of information that are most valuable under the given task; motor processing links such information with the oculomotor act by sampling the next gaze position and thus performing the gaze shift. In such a framework, the choice of where to look next is task-dependent and oriented to classes of objects embedded within pictures of complex scenes. The dependence on task is taken into account by exploiting the value and the payoff of gazing at certain image patches or proto-objects that provide a sparse representation of the scene objects. The different levels of the action-perception loop are represented in probabilistic form and eventually give rise to a stochastic process that generates the gaze sequence. This way, the model also accounts for statistical properties of gaze shifts, such as individual scan path variability. Results of the simulations are compared both with experimental data derived from publicly available datasets and with data from our own experiments. © 2014 Springer Science+Business Media New York.
KW - Gaze guidance
KW - Payoff
KW - Stochastic fixation prediction
KW - Value
KW - Visual attention
UR - https://www.scopus.com/pages/publications/84906950781
U2 - 10.1007/s12559-014-9262-3
DO - 10.1007/s12559-014-9262-3
M3 - Article
SN - 1866-9956
VL - 6
SP - 558
EP - 584
JO - Cognitive Computation
JF - Cognitive Computation
IS - 3
ER -