Conjoining auditory and visual features during high-rate serial presentation: Processing and conjoining two features can be faster than processing one
Authors: David L. Woods, Claude Alain, Keith H. Ogawa
Affiliations:
1. Neurology Service (612/127), Northern California System of Clinics, University of California, Davis, 150 Muir Road, Martinez, CA 94553
2. Rotman Research Institute of the Baycrest Center, Toronto, Ontario, Canada
3. University of Toronto, North York, Ontario, Canada
4. St. Mary’s College, Moraga, California
Abstract: The time required to conjoin stimulus features in high-rate serial presentation tasks was estimated in auditory and visual modalities. In the visual experiment, targets were defined by color, orientation, or the conjunction of color and orientation features. Responses were fastest in color conditions, intermediate in orientation conditions, and slowest in conjunction conditions. Estimates of feature conjunction time (FCT) were derived on the basis of a model in which features were processed in parallel and then conjoined, permitting FCTs to be estimated from the difference in reaction times between conjunction and the slowest single-feature condition. Visual FCTs averaged 17 msec, but were negative for certain stimuli and subjects. In the auditory experiment, targets were defined by frequency, location, or the conjunction of frequency and location features. Responses were fastest in frequency conditions, but were faster in conjunction than in location conditions, yielding negative FCTs. The results from both experiments suggest that the processing of stimulus features occurs interactively during early stages of feature conjunction.
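The FCT estimate described in the abstract reduces to simple arithmetic under the parallel-processing model: if features are analyzed in parallel and then conjoined, the conjunction reaction time should equal the slower single-feature reaction time plus the conjunction time. A minimal sketch of that computation follows; the reaction-time values are illustrative placeholders, not data from the paper.

```python
def fct_ms(rt_conjunction, rt_feature_a, rt_feature_b):
    """Feature conjunction time (ms): conjunction RT minus the
    slowest single-feature RT, per the parallel-then-conjoin model."""
    return rt_conjunction - max(rt_feature_a, rt_feature_b)

# Visual-style example: conjunction slower than both features -> positive FCT.
print(fct_ms(rt_conjunction=540, rt_feature_a=480, rt_feature_b=523))  # 17

# Auditory-style example: conjunction faster than the slower feature
# (location) -> negative FCT, as reported in the auditory experiment.
print(fct_ms(rt_conjunction=510, rt_feature_a=470, rt_feature_b=525))  # -15
```

A negative value under this model is diagnostic: it implies the conjunction response outpaced the slowest single feature, which is inconsistent with strictly serial conjunction after independent feature processing.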
Indexed in SpringerLink and other databases.