Show simple item record

dc.contributor.author: Grieben, Raul
dc.contributor.author: Tekülve, Jan
dc.contributor.author: Zibner, Stephan K. U.
dc.contributor.author: Lins, Jonas
dc.contributor.author: Schneegans, Sebastian
dc.contributor.author: Schöner, Gregor
dc.date.accessioned: 2020-07-10T15:07:16Z
dc.date.available: 2020-07-10T15:07:16Z
dc.date.issued: 2020-02-11
dc.identifier.citation: Attention, Perception, & Psychophysics, volume 82, issue 2, pages 775-798
dc.identifier.issn: 1943-3921
dc.identifier.other: s13414-019-01898-y
dc.identifier.other: 1898
dc.identifier.uri: https://www.repository.cam.ac.uk/handle/1810/307842
dc.description.abstract: Any object-oriented action requires that the object first be brought into the attentional foreground, often through visual search. Outside the laboratory, this would always take place in the presence of a scene representation acquired from ongoing visual exploration. The interaction of scene memory with visual search is still not completely understood. Feature integration theory (FIT) has shaped both research on visual search, emphasizing the scaling of search times with set size when searches entail feature conjunctions, and research on visual working memory through the change detection paradigm. Despite its neural motivation, there is no consistent neural process account of FIT that spans both dimensions. We propose such an account that integrates (1) visual exploration and the building of scene memory, (2) the attentional detection of visual transients and the extraction of search cues, and (3) visual search itself. The model uses dynamic field theory, in which networks of neural dynamic populations supporting stable activation states are coupled to generate sequences of processing steps. The neural architecture accounts for basic findings in visual search and proposes a concrete mechanism for the integration of working memory into the search process. In a behavioral experiment, we address the long-standing question of whether both the overall speed and the efficiency of visual search can be improved by scene memory. We find both effects and provide model fits of the behavioral results. In a second experiment, we show that the increase in efficiency is fragile, and trace that fragility to the resetting of spatial working memory.
dc.language: en
dc.publisher: Springer US
dc.subject: 40 Years of Feature Integration: Special Issue in Memory of Anne Treisman
dc.subject: Visual search
dc.subject: Visual working memory
dc.subject: Neural network modeling
dc.title: Scene memory and spatial inhibition in visual search
dc.type: Article
dc.date.updated: 2020-07-10T15:07:16Z
dc.identifier.doi: 10.17863/CAM.54937
rioxxterms.versionofrecord: 10.3758/s13414-019-01898-y
rioxxterms.version: VoR
rioxxterms.licenseref.uri: https://creativecommons.org/licenses/by/4.0/
dc.contributor.orcid: Grieben, Raul [0000-0003-1718-7679]
dc.identifier.eissn: 1943-393X


Files in this item


This item appears in the following Collection(s)
