Similar articles
20 similar articles found
1.
The last couple of years have seen a rapid growth of interest (especially amongst cognitive psychologists, cognitive neuroscientists, and developmental researchers) in the study of crossmodal correspondences – the tendency for our brains (not to mention the brains of other species) to preferentially associate certain features or dimensions of stimuli across the senses. By now, robust empirical evidence supports the existence of numerous crossmodal correspondences, affecting people’s performance across a wide range of psychological tasks – in everything from the redundant target effect paradigm through to studies of the Implicit Association Test, and from speeded discrimination/classification tasks through to unspeeded spatial localisation and temporal order judgment tasks. However, one question that has yet to receive a satisfactory answer is whether crossmodal correspondences automatically affect people’s performance (in all, or at least in a subset of tasks), as opposed to reflecting more of a strategic, or top-down, phenomenon. Here, we review the latest research on the topic of crossmodal correspondences to have addressed this issue. We argue that answering the question will require researchers to be more precise in terms of defining what exactly automaticity entails. Furthermore, one’s answer to the automaticity question may also hinge on the answer to a second question: Namely, whether crossmodal correspondences are all ‘of a kind’, or whether instead there may be several different kinds of crossmodal mapping (e.g., statistical, structural, and semantic). Different answers to the automaticity question may then be revealed depending on the type of correspondence under consideration. We make a number of suggestions for future research that might help to determine just how automatic crossmodal correspondences really are.

2.
The “pip-and-pop effect” refers to the facilitation of search for a visual target (a horizontal or vertical bar whose color changes frequently) among multiple visual distractors (tilted bars also changing color unpredictably) by the presentation of a spatially uninformative auditory cue synchronized with the color change of the visual target. In the present study, the visual stimuli in the search display changed brightness instead of color, and the crossmodal congruency between the pitch of the auditory cue and the brightness of the visual target was manipulated. When cue presence and cue congruency were randomly varied between trials (Experiment 1), both congruent cues (low-frequency tones synchronized with dark target states or high-frequency tones synchronized with bright target states) and incongruent cues (the reversed mapping) facilitated visual search performance equally, relative to a no-cue baseline condition. However, when cue congruency was blocked and the participants were informed about the pitch–brightness mapping in the cue-present blocks (Experiment 2), performance was significantly enhanced when the cue and target were crossmodally congruent as compared to when they were incongruent. These results therefore suggest that the crossmodal congruency between auditory pitch and visual brightness can influence performance in the pip-and-pop task by means of top-down facilitation.

3.
In an attempt to facilitate visual recall when material is presented under bisensory simultaneous conditions (i.e., visual and auditory stimuli are presented together), auditory material was delayed up to 1/4 sec relative to the onset of the visual material. Visual recall, however, remained stable across the auditory delays, suggesting a limitation in the visual system beyond that associated with the simultaneous occurrence of auditory material.

4.
Responses to unimodal and multimodal attributes of a compound auditory–visual stimulus were investigated in 4-, 6-, 8-, and 10-month-old infants. First, infants were habituated to a compound stimulus consisting of a visual stimulus that moved up and down on a video monitor and a sound that occurred each time the visual stimulus reversed direction at the bottom. Once each infant met a habituation criterion, a series of test trials was administered to assess responsiveness to the components of the compound stimulus. Response was defined as the total duration of visual fixation in each trial. In the two unimodal test trials, the rate at which the component was presented was changed while the rate of the other component remained the same, whereas in the bimodal test trial the rate of both components was changed simultaneously. Results indicated that infants at each age successfully discriminated the bimodal and the two unimodal changes and that regression to the mean did not account for the results. Results also showed that disruption of the temporal relationship that accompanied the change in rate in the two unimodal test trials was also discriminable, but rate changes appeared to play a greater role in responsiveness than did synchrony changes. Considered together with results from similar prior studies, the current results are consistent with the modality appropriateness hypothesis in showing that discrimination of temporal changes in the auditory and visual modalities is dependent on the specialization of the sensory modalities.

5.
A chimpanzee acquired an auditory–visual intermodal matching-to-sample (AVMTS) task, in which, following the presentation of a sample sound, the subject had to select from two alternatives a photograph that corresponded to the sample. The acquired AVMTS performance might shed light on chimpanzee intermodal cognition, which is one of the least understood aspects in chimpanzee cognition. The first aim of this paper was to describe the training process of the task. The second aim was to describe through a series of experiments the features of the chimpanzee AVMTS performance in comparison with results obtained in a visual intramodal matching task, in which a visual stimulus alone served as the sample. The results show that the acquisition of AVMTS was facilitated by the alternation of auditory presentation and audio-visual presentation (i.e., the sample sound together with a visual presentation of the object producing the particular sample sound). Once AVMTS performance was established for the limited number of stimulus sets, the subject showed rapid transfer of the performance to novel sets. However, the subject showed a steep decay of matching performance as a function of the delay interval between the sample and the choice alternative presentations when the sound alone, but not the visual stimulus alone, served as the sample. This might suggest a cognitive limitation for the chimpanzee in auditory-related tasks.

6.
In the present exploratory study based on 7 subjects, we examined the composition of magnetoencephalographic (MEG) brain oscillations induced by the presentation of an auditory, visual, and audio-visual stimulus (a talking face) using an oddball paradigm. The composition of brain oscillations was assessed here by analyzing the probability-classification of short-term MEG spectral patterns. The probability index for particular brain oscillations being elicited was dependent on the type and the modality of the sensory percept. The maintenance of the integrated audio-visual percept was accompanied by a unique composition of distributed brain oscillations typical of the auditory and visual modalities, with the contribution of brain oscillations characteristic of the visual modality being dominant. Oscillations around 20 Hz were characteristic of the maintenance of the integrated audio-visual percept. Identifying the actual composition of brain oscillations allowed us (1) to distinguish two subjectively/consciously identical mental percepts, and (2) to characterize the types of brain functions involved in the maintenance of the multi-sensory percept.

7.
The rare presentation of a sound that deviates from the auditory background tends to capture attention, which is known to impede cognitive functioning. Such disruption is usually measured using performance on a concurrent visual task. Growing evidence recently showed that the pupillary dilation response (PDR) could index the attentional response triggered by a deviant sound. Given that the pupil diameter is sensitive to several vision-related factors, it is unclear whether the PDR could serve to study attentional capture in such contexts. Hence, the present study aimed at verifying whether the PDR can be used as a proxy for auditory attentional capture while a visual serial recall task (Experiment 1) or a reading comprehension task (Experiment 2) – respectively producing changes in luminance and gaze position – is being performed. Results showed that presenting a deviant sound within steady-state standard sounds elicited larger PDRs than a standard sound. Moreover, the magnitude of these PDRs was positively related to the amount of performance disruption produced by deviant sounds in Experiment 1. Performance remained unaffected by the deviants in Experiment 2, thereby implying that the PDR may be a more sensitive attention-capture index than behavioural measures. These results suggest that the PDR can be used to assess attentional capture by a deviant sound in contexts where the pupil diameter can be modulated by the visual environment.

8.
Crossmodal correspondences have often been demonstrated using congruency effects between pairs of stimuli in different sensory modalities that vary along separate dimensions. To date, however, the extent to which these correspondences are relative versus absolute in nature remains unclear: that is, whether they result from pre-defined values that rigidly link the two dimensions or rather from flexible values related to the previous occurrence of the crossmodal stimuli. Here, we investigated this issue in a speeded classification task featuring the correspondence between auditory pitch and visual size (e.g., congruent correspondence between high pitch/small disc and low pitch/large disc). Participants classified the size of the visual stimuli (large vs. small) while hearing concurrent high- or low-pitched task-irrelevant sounds. On some trials, visual stimuli were instead paired with a tone of “intermediate” pitch, which could be interpreted differently according to the auditory stimulus on the preceding trial (i.e., as “lower” following the presentation of a high-pitched tone, but as “higher” following the presentation of a low-pitched tone). Performance on sequence-congruent trials (e.g., when a small disc paired with the intermediate-pitched tone was preceded by a low-pitched tone) was compared to sequence-incongruent trials (e.g., when a small disc paired with the intermediate-pitched tone was preceded by a high-pitched tone). The results revealed faster classification responses on sequence-congruent than on sequence-incongruent trials. This demonstrates that the effect of the pitch/size correspondence is relative in nature, and subject to trial-by-trial interpretation of the stimulus pair.

9.
Four experiments examined judgements of the duration of auditory and visual stimuli. Two used a bisection method, and two used verbal estimation. Auditory/visual differences were found when durations of auditory and visual stimuli were explicitly compared and when durations from both modalities were mixed in partition bisection. Differences in verbal estimation were also found both when people received a single modality and when they received both. In all cases, the auditory stimuli appeared longer than the visual stimuli, and the effect was greater at longer stimulus durations, consistent with a “pacemaker speed” interpretation of the effect. Results suggested that Penney, Gibbon, and Meck's (2000) “memory mixing” account of auditory/visual differences in duration judgements, while correct in some circumstances, was incomplete, and that in some cases people were basing their judgements on some preexisting temporal standard.

10.
Four experiments examined judgements of the duration of auditory and visual stimuli. Two used a bisection method, and two used verbal estimation. Auditory/visual differences were found when durations of auditory and visual stimuli were explicitly compared and when durations from both modalities were mixed in partition bisection. Differences in verbal estimation were also found both when people received a single modality and when they received both. In all cases, the auditory stimuli appeared longer than the visual stimuli, and the effect was greater at longer stimulus durations, consistent with a “pacemaker speed” interpretation of the effect. Results suggested that Penney, Gibbon, and Meck's (2000) “memory mixing” account of auditory/visual differences in duration judgements, while correct in some circumstances, was incomplete, and that in some cases people were basing their judgements on some preexisting temporal standard.

11.
The current study evaluated the effectiveness of a go/no-go successive matching-to-sample procedure (S-MTS) to establish auditory–visual equivalence classes with college students. A sample and a comparison were presented, one at a time, in the same location. During training, after an auditory stimulus was presented, a green box appeared in the center of the screen for participants to touch to produce the comparison. Touching the visual comparison that was related to the auditory sample (e.g., A1B1) produced points, while touching or refraining from touching an unrelated comparison (e.g., A1B2) produced no consequences. Following AB/AC training, participants were tested on untrained relations (i.e., BA/CA and BC/CB), as well as tacting and sorting. During BA/CA relations tests, after touching the visual sample, the auditory stimulus was presented along with a white box for participants to respond. During BC/CB relations tests, after touching the visual sample, a visual comparison appeared. Across 2 experiments, all participants met the emergence criterion for untrained relations and for sorting. Additionally, 14 out of 24 participants tacted all visual stimuli correctly. Results suggest the auditory–visual S-MTS procedure is an effective alternative to simultaneous MTS for establishing conditional relations and auditory–visual equivalence classes.

12.
Sighted individuals are less accurate and slower to localize sounds coming from the peripheral space than sounds coming from the frontal space. This specific bias in favour of the frontal auditory space seems reduced in early blind individuals, who are notably better than sighted individuals at localizing sounds coming from the peripheral space. Currently, it is not clear to what extent this bias in the auditory space is a general phenomenon or whether it applies only to spatial processing (i.e. sound localization). In our approach we compared the performance of early blind participants with that of sighted subjects during a frequency discrimination task with sounds originating either from frontal or peripheral locations. Results showed that early blind participants discriminated both peripheral and frontal sounds faster than sighted subjects did. In addition, sighted subjects were faster at discriminating frontal sounds than peripheral ones, whereas early blind participants showed equal discrimination speed for frontal and peripheral sounds. We conclude that the spatial bias observed in sighted subjects reflects an imbalance in the spatial distribution of auditory attention resources that is induced by visual experience.

13.
Two experiments tested humans on a memory for duration task based on the method of Wearden and Ferrara (1993), which had previously provided evidence for subjective shortening in memory for stimulus duration. Auditory stimuli were tones (filled) or click-defined intervals (unfilled). Filled visual stimuli were either squares or lines, with the unfilled interval being the time between two line presentations. In Experiment 1, good evidence for subjective shortening was found when filled and unfilled visual stimuli, or filled auditory stimuli, were used, but evidence for subjective shortening with unfilled auditory stimuli was more ambiguous. Experiment 2 used a simplified variant of the Wearden and Ferrara task, and evidence for subjective shortening was obtained from all four stimulus types.

14.
Two experiments tested humans on a memory for duration task based on the method of Wearden and Ferrara (1993), which had previously provided evidence for subjective shortening in memory for stimulus duration. Auditory stimuli were tones (filled) or click-defined intervals (unfilled). Filled visual stimuli were either squares or lines, with the unfilled interval being the time between two line presentations. In Experiment 1, good evidence for subjective shortening was found when filled and unfilled visual stimuli, or filled auditory stimuli, were used, but evidence for subjective shortening with unfilled auditory stimuli was more ambiguous. Experiment 2 used a simplified variant of the Wearden and Ferrara task, and evidence for subjective shortening was obtained from all four stimulus types.

15.
Skrzypulec, Błażej. Synthese (2021) 198(3): 2101–2127
It is commonly believed that human perceptual experiences can be, and usually are, multimodal. What is more, a stronger thesis is often proposed that some perceptual multimodal...

16.
Since Köhler’s experiments in the 1920s, researchers have demonstrated a correspondence between words and shapes. Dubbed the “Bouba–Kiki” effect, these auditory–visual associations extend across cultures and are thought to be universal. More recently the effect has been shown in other modalities including taste, suggesting the effect is independent of vision. The study presented here tested the “Bouba–Kiki” effect in the auditory–haptic modalities, using 2D cut-outs and 3D models based on Köhler’s original drawings. Presented with shapes they could feel but not see, sighted participants showed a robust “Bouba–Kiki” effect. However, in a sample of people with a range of visual impairments, from congenital total blindness to partial sight, the effect was significantly less pronounced. The findings suggest that, in the absence of a direct visual stimulus, visual imagery plays a role in crossmodal integration.

17.
We used a probe-dot procedure to examine the roles of excitatory attentional guidance and distractor suppression in search for movement-form conjunctions. Participants in Experiment 1 completed a conjunction (moving X amongst moving Os and static Xs) and two single-feature (moving X amongst moving Os, and static X amongst static Os) conditions. "Active" participants searched for the target, whereas "passive" participants viewed the displays without responding. Subsequently, both groups located (left or right) a probe dot appearing in either an occupied or an unoccupied location. In the conjunction condition, the active group located probes presented on static distractors more slowly than probes presented on moving distractors, reversing the direction of the difference found within the passive group. This disadvantage for probes on static items was much stronger in conjunction than in single-feature search. The same pattern of results was replicated in Experiment 2, which used a go/no-go procedure. Experiment 3 extended the go/no-go procedure to the case of search for a static target and revealed increased probe localisation times as a consequence of active search, primarily for probes on moving distractor items. The results demonstrated attentional guidance by inhibition of distractors in conjunction search.

18.
This study investigated phonological processing in bilingual readers of Korean and Chinese. Three types of same–different matching between the prime and target were compared. The critical question was whether the phonological information of English was activated automatically in a semantic judgment task involving only Korean and Chinese. The results showed that the latencies of the conditions (S+P−, S−P−, and S−P+) differed significantly; latencies in the S−P+ condition, where the prime and target share a phonological but not a semantic relation, were slower than in the S−P− condition, where they share neither a semantic nor a phonological relation. The implications for phonological recoding are discussed.

19.
The aim of this study was to explore whether the content of a simple concurrent verbal load task determines the extent of its interference on memory for coloured shapes. The task consisted of remembering four visual items while repeating aloud a pair of words that varied in terms of imageability and relatedness to the task set. At test, a cue appeared that was either the colour or the shape of one of the previously seen objects, with participants required to select the object's other feature from a visual array. During encoding and retention, there were four verbal load conditions: (a) a related, shape–colour pair (from outside the experimental set, i.e., “pink square”); (b) a pair of unrelated but visually imageable, concrete, words (i.e., “big elephant”); (c) a pair of unrelated and abstract words (i.e., “critical event”); and (d) no verbal load. Results showed differential effects of these verbal load conditions. In particular, imageable words (concrete and related conditions) interfered to a greater degree than abstract words. Possible implications for how visual working memory interacts with verbal memory and long-term memory are discussed.
