In a recent article published by the QVCL team (Dr. Monica Castelhano and Effie Pereira), the study revealed that preview information and scene context each independently boost the parafoveal processing of objects, with no interaction from object–scene congruency.
Read more about the exciting research coming from our lab here:
Many studies of reading have shown the enhancing effect of context on the processing of a word before it is directly fixated (parafoveal processing of words). Here, we examined whether scene context influences the parafoveal processing of objects and enhances the extraction of object information. Using a modified boundary paradigm called the Dot-Boundary paradigm, participants fixated on a suddenly onsetting cue before a preview object onset 4° away. The preview object could be identical to the target, visually similar, visually dissimilar, or a control (black rectangle). The preview changed to the target object once a saccade toward the object was made. Critically, the objects were presented on either a consistent or an inconsistent scene background. Results revealed a greater processing benefit for consistent than for inconsistent scene backgrounds, and that identical and visually similar previews produced greater processing benefits than the other previews. In a second experiment, we added a context condition in which the target location was inconsistent but the scene semantics remained consistent. We found that changing the location of the target object disrupted the processing benefit derived from the consistent context. Most importantly, across both experiments, the effect of preview was not enhanced by scene context. Thus, preview information and scene context appear to independently boost the parafoveal processing of objects, without any interaction from object–scene congruency.
Find the article here.