Research Overview




We are interested in how quickly people can grasp a scene when it is presented very briefly. By manipulating scene properties, we can investigate which visual factors are important in determining a scene's gist. We use the Contextual Bias Paradigm (below) to assess whether a scene is "understood" without having to ask participants to name the scene or to verify a scene name.



By examining people's eye movements as they look around a scene, we can investigate what information is prioritized for further visual processing.

Of course, the information that is prioritized depends a great deal on the task, among other things. So by manipulating the task, as well as other factors, we can better understand gaze control mechanisms, attention, and their interaction with memory.


Scene Gist and Gaze Control

By combining our interest in the fast processing of scenes with our interest in gaze control, we are also investigating how the first glance at a scene influences its later processing. We use the Flash Preview-Moving Window paradigm to examine these questions about the interaction between scene gist and eye movements.


Extraction of Spatial Layout in Scene Perception

One important aspect of understanding a scene is understanding its spatial layout. We are interested in how the visual system represents the space of a scene. We use computer-generated images to examine how different viewpoints of a scene are processed and integrated.



In addition to studying visual processes in scenes, we are interested in deciphering the types of information (potentially present in scenes) that can influence the deployment of attention. Visual search tasks allow us to manipulate various factors outside of the scene biases that naturally influence processing while viewing pictures. By combining these approaches, we can gain a better understanding of visual processing in general.



Much of the research on eye movement control and its relation to ongoing cognitive processes has been done in reading. Reading offers highly structured visual input with a very clear task (comprehending the text). As a result, this research offers a number of insights into the architecture and processes of the visual system.



In the Queen’s Visual Cognition Laboratory (QVCL), we are interested in how people perceive and process the visual information in their immediate environment (or real-world scenes). We are interested in how people are able to quickly understand the space they are looking at (getting the “scene gist”), how and when people attend to different aspects of the environment, and what type of information is remembered when one is no longer in that environment. We use behavioural and eye movement experiments, in which we vary different aspects of the images and tasks to examine these questions.

Scene Categorization

We examine how cognitive processes work by disrupting those processes and measuring how long they take to recover. For example, in a series of studies, we examined how scenes that contain more than one possible interpretation (depending on where you are looking) are processed over time and with different tasks (e.g., Castelhano, Therauilt, & Fernandes, 2018). Using the slider, you can see what each scene looks like when it is normal and when it contains two contradictory categories.


Visual Search

One topic we have explored is how information from objects and scenes is used to search extremely efficiently. Above are example stimuli from one of our studies (Pereira & Castelhano, 2014), in which we manipulated the information outside of a moving window. This window is normally tied to where a person is looking, but here it can be moved using your mouse.
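The basic idea behind a gaze-contingent moving window can be sketched in a few lines of code. The sketch below is purely illustrative (it is not our experiment software, and the function name, window shape, and use of the image mean as the "degraded" periphery are all simplifying assumptions): everything inside a circular window around the current gaze position is shown normally, and everything outside it is masked.

```python
import numpy as np

def apply_moving_window(image, center, radius):
    """Illustrative moving-window mask (not actual experiment code).

    image:  2D grayscale array representing the scene
    center: (row, col) of the current gaze position
    radius: window radius in pixels

    Pixels inside the window keep their original values; pixels
    outside are replaced with the image mean, standing in for the
    masked or degraded periphery.
    """
    rows, cols = np.indices(image.shape)
    dist = np.sqrt((rows - center[0]) ** 2 + (cols - center[1]) ** 2)
    masked = np.full_like(image, image.mean())
    inside = dist <= radius
    masked[inside] = image[inside]
    return masked

# Example: a 100x100 gradient "scene" with the window at its center
scene = np.tile(np.arange(100, dtype=float), (100, 1))
view = apply_moving_window(scene, center=(50, 50), radius=20)
```

In a real experiment, `center` would be updated continuously from an eye tracker (or, as in the web demo, from the mouse position), so the visible region moves with the observer's gaze.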

Can you find these objects in the image below?

1) a lamp 2) a glass 3) a magazine basket 


Object Function and Action

We’re also interested in how objects are organized in scenes, and in their links to actions and object functions (Castelhano & Witherspoon, 2016). We are testing this using a number of invented objects that don’t exist in the real world. With these objects, we can control the type and extent of knowledge people have about them, see how this information shapes assumptions about the objects, and, in turn, examine how these assumptions affect attentional control during search.