Published on Tue Mar 30 2021

Identification of Target Objects from Gaze Behavior during a Virtual Navigation Task

Enders, L. R., Smith, R. J., Gordon, S. M., Ries, A. J., Touryan, J.

This study examined eye movement behavior while participants navigated through a complex virtual environment. Participants completed a visual search task in which they were asked to find and count occurrences of specific targets. Results show a significant relationship between gaze behavior and target objects across subjects.

Abstract

Eye tracking has been an essential tool within the vision science community for many years. However, the majority of studies involving eye-tracking technology employ a relatively passive approach through the use of static imagery, prescribed motion, or video stimuli. This is in contrast to our everyday interaction with the natural world, where we navigate our environment while actively seeking and using task-relevant visual information. For this reason, vision researchers are beginning to use virtual environment platforms, which offer interactive, realistic visual environments while maintaining a substantial level of experimental control. Here, we recorded eye movement behavior while participants freely navigated through a complex virtual environment. Within this environment, participants completed a visual search task in which they were asked to find and count occurrences of specific targets among numerous distractor items. We assigned each participant to one of four target groups: Humvees, motorcycles, aircraft, or furniture. Our results show a significant relationship between gaze behavior and target objects across subject groups. Specifically, we see an increased number of fixations and increased dwell time on target relative to distractor objects. In addition, we included a divided attention task to investigate how search patterns changed with the addition of a secondary task. With increased cognitive load, subjects slowed their movement speed, decreased gaze time on objects, and increased the number of objects scanned in the environment. Overall, our results confirm previous findings from more controlled laboratory settings and demonstrate that complex virtual environments can be used for active visual search experimentation while maintaining a high level of precision in the quantification of gaze information and visual attention. This study contributes to our understanding of how individuals search for information in a naturalistic virtual environment. Likewise, our paradigm provides an intriguing look into the heterogeneity of individual behaviors when completing an untimed visual search task while actively navigating.
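To make the gaze metrics in the abstract concrete, the sketch below shows one simplified way such quantities could be computed from a gaze log. The data format, label names, and sampling rate are all hypothetical (the paper does not specify its pipeline), and grouping consecutive same-object samples into a "visit" is a stand-in for a true fixation-detection algorithm:

```python
from collections import defaultdict
from itertools import groupby

# Hypothetical gaze log: (timestamp_ms, object_label) samples at 50 Hz,
# where each label marks the scene object the gaze ray intersects.
gaze_log = [
    (0,   "humvee"), (20,  "humvee"), (40, "humvee"),
    (60,  "chair"),  (80,  "chair"),
    (100, "humvee"), (120, "humvee"),
]

def gaze_metrics(log, sample_ms=20):
    """Count gaze visits and total dwell time (ms) per object label.

    A 'visit' is any run of consecutive samples on the same object --
    a simplification of a real fixation-detection algorithm, which
    would also use gaze velocity and dispersion thresholds."""
    visits = defaultdict(int)
    dwell = defaultdict(int)
    for label, run in groupby(log, key=lambda sample: sample[1]):
        visits[label] += 1
        dwell[label] += len(list(run)) * sample_ms
    return dict(visits), dict(dwell)

visits, dwell = gaze_metrics(gaze_log)
print(visits)  # {'humvee': 2, 'chair': 1}
print(dwell)   # {'humvee': 100, 'chair': 40}
```

Comparing these per-label counts between target and distractor categories is the kind of analysis that would reveal the increased fixations and dwell time on targets reported above.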