Similar Documents
20 similar documents found (search time: 15 ms)
1.
Motion information available to different sensory modalities can interact at both perceptual and post-perceptual (i.e., decisional) stages of processing. However, to date, researchers have only been able to demonstrate the influence of one of these components at any given time, hence the relationship between them remains uncertain. We addressed the interplay between the perceptual and post-perceptual components of information processing by assessing their influence on performance within the same experimental paradigm. We used signal detection theory to discriminate changes in perceptual sensitivity (d') from shifts in response criterion (c) in performance on a detection (Experiment 1) and a classification (Experiment 2) task regarding the direction of auditory apparent motion streams presented in noise. In the critical conditions, a visual motion distractor moving either leftward or rightward was presented together with the auditory motion. The results demonstrated a significant decrease in sensitivity to the direction of the auditory targets in the crossmodal conditions as compared to the unimodal baseline conditions that was independent of the relative direction of the visual distractor. In addition, we also observed significant shifts in response criterion, which were dependent on the relative direction of the distractor apparent motion. These results support the view that the perceptual and decisional components involved in audiovisual interactions in motion processing can coexist but are largely independent of one another.
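As background for the signal detection analysis this abstract describes, here is a minimal sketch (illustrative only, not taken from the paper's materials; the counts are hypothetical) of how sensitivity d' and criterion c are conventionally computed from hit and false-alarm counts under the equal-variance Gaussian model:

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Compute signal detection measures from raw trial counts.

    Standard equal-variance Gaussian model:
        d' = z(H) - z(F)
        c  = -(z(H) + z(F)) / 2
    A log-linear correction keeps hit/false-alarm rates of
    exactly 0 or 1 from producing infinite z-scores.
    """
    h = (hits + 0.5) / (hits + misses + 1)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_h, z_f = norm.ppf(h), norm.ppf(f)
    d_prime = z_h - z_f            # perceptual sensitivity
    criterion = -(z_h + z_f) / 2   # response bias
    return d_prime, criterion

# Hypothetical counts for one participant in one condition:
d, c = sdt_measures(hits=40, misses=10, false_alarms=15, correct_rejections=35)
print(f"d' = {d:.2f}, c = {c:.2f}")
```

In this framing, the paper's pattern of results maps onto a drop in d' that is independent of the distractor's direction, together with a shift in c that tracks it.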

2.
In a natural environment, objects that we look for often make characteristic sounds. A hiding cat may meow, or the keys in the cluttered drawer may jingle when moved. Using a visual search paradigm, we demonstrated that characteristic sounds facilitated visual localization of objects, even when the sounds carried no location information. For example, finding a cat was faster when participants heard a meow sound. In contrast, sounds had no effect when participants searched for names rather than pictures of objects. For example, hearing “meow” did not facilitate localization of the word cat. These results suggest that characteristic sounds cross-modally enhance visual (rather than conceptual) processing of the corresponding objects. Our behavioral demonstration of object-based cross-modal enhancement complements the extensive literature on space-based cross-modal interactions. When looking for your keys next time, you might want to play jingling sounds.

3.
Implicit memory is often thought to reflect an influence of past experience on perceptual processes, yet priming effects are found when the perceptual format of stimuli changes between study and test episodes. Such cross-modal priming effects have been hypothesized to depend upon stimulus recoding processes whereby a stimulus presented in one modality is converted to other perceptual formats. The present research examined recoding accounts of cross-modal priming by testing patients with verbal production deficits that presumably impair the conversion of visual words into auditory/phonological forms. The patients showed normal priming in a visual stem completion task following visual study (Experiment 1), but showed impairments following auditory study in both implicit (Experiment 2) and explicit (Experiment 3) stem completion. The results are consistent with the hypothesis that verbal production processes contribute to the recoding of visual stimuli and support cross-modal priming. The results also indicate that shared processes contribute to both explicit memory and cross-modal implicit memory.

4.
Three experiments examined repetition priming for meaningful environmental sounds (e.g., clock ticking, tooth brushing, toilet flushing) in a sound stem identification paradigm using brief sound cues. Prior encoding of target sounds together with their associated names facilitated subsequent identification of sound stems relative to nonstudied controls. In contrast, prior exposure to names alone in the absence of the environmental sounds did not prime subsequent sound stem identification performance at all (Experiments 1 and 3). Explicit and implicit memory were dissociated such that sound stem cued recall was higher following semantic than nonsemantic encoding, whereas sound priming was insensitive to manipulations of depth of encoding (Experiments 2 and 3). These results extend the findings of long-term repetition priming into the auditory nonverbal domain and suggest that priming for environmental sounds is mediated primarily by perceptual processes.

5.
The influence of the specificity of the visual context on the identification of environmental sounds (i.e., product sounds) was investigated. Two different visual context types (i.e., scene and object contexts)—which varied in the specificity of the semantic information—and a control condition (meaningless images) were employed. A contextual priming paradigm was used. Identification accuracy and response times were determined in two context conditions and one control condition. The results suggest that visual context has a positive effect on sound identification. In addition, two types of product sounds (location-specific and event-specific sounds) were observed which exhibited different sensitivities to scene and object contexts. Furthermore, the results suggest that conceptual interactions exist between an object and a context that do not share the same perceptual domain. Therefore, context should be regarded as a network of conceptually associated items in memory.

6.
The present study examines implicit phonetic symbolism, which posits that arbitrary linguistic sounds are related to certain characteristics of other modalities, such as color, touch, or emotion. In consonant discrimination and lightness discrimination tasks using Garner's speeded classification paradigm, spoken sounds (voiced/voiceless consonants) and the lightness of visual stimuli (black/white squares) were systematically varied to assess cross-modal interactions. Congruent audio-visual pairs (voiced consonants with black, and voiceless consonants with white) facilitated consonant discrimination. In lightness discrimination, no congruence effect or congruent facilitation was observed. These results indicate that cross-modal interactions in implicit phonetic symbolism can be found in correspondences between spoken linguistic sounds and visual lightness.

7.
In a previous study, Ward (1994) reported that spatially uninformative visual cues orient auditory attention but that spatially uninformative auditory cues fail to orient visual attention. This cross-modal asymmetry is consistent with other intersensory perceptual phenomena that are dominated by the visual modality (e.g., ventriloquism). However, Spence and Driver (1997) found exactly the opposite asymmetry under different experimental conditions and with a different task. In spite of the several differences between the two studies, Spence and Driver (see also Driver & Spence, 1998) argued that Ward's findings might have arisen from response-priming effects, and that the cross-modal asymmetry they themselves reported, in which auditory cues affect responses to visual targets but not vice versa, is in fact the correct result. The present study investigated cross-modal interactions in stimulus-driven spatial attention orienting under Ward's complex cue environment conditions using an experimental procedure that eliminates response-priming artifacts. The results demonstrate that the cross-modal asymmetry reported by Ward (1994) does occur when the cue environment is complex. We argue that strategic effects in cross-modal stimulus-driven orienting of attention are responsible for the opposite asymmetries found by Ward and by Spence and Driver (1997).

8.
Previous studies have suggested that the human visual system processes faces and bodies holistically—that is, the different body parts are integrated into a unified representation. However, the time course of this integrative process is less well understood. In the present study, we investigated this issue by recording event-related potentials evoked by a face and two hands presented simultaneously and in different configurations. When the hands were rotated to obtain a biologically implausible configuration, a reduction of the P2 amplitude was observed relative to the condition in which the face and hands were retained in their veridical configuration and were supplemented with visual cues to highlight further the overall body posture. Our results show that the P2 component is sensitive to manipulations affecting the configuration of face and hand stimuli and suggest that the P2 reflects the operation of perceptual mechanisms responsible for the integrated processing of visually presented body parts.

9.
One of the central themes in the study of language acquisition is the gap between the linguistic knowledge that learners demonstrate, and the apparent inadequacy of linguistic input to support induction of this knowledge. One of the first linguistic abilities to exemplify this problem in the course of development is speech perception: specifically, learning the sound system of one’s native language. Native-language sound systems are defined by meaningful contrasts among words in a language, yet infants learn these sound patterns before any significant numbers of words are acquired. Previous approaches to this learning problem have suggested that infants can learn phonetic categories from statistical analysis of auditory input, without regard to word referents. Experimental evidence presented here suggests instead that young infants can use visual cues present in word-labeling situations to categorize phonetic information. In Experiment 1, 9-month-old English-learning infants failed to discriminate two non-native phonetic categories, establishing baseline performance in a perceptual discrimination task. In Experiment 2, these infants succeeded at discrimination after watching contrasting visual cues (i.e., videos of two novel objects) paired consistently with the two non-native phonetic categories. In Experiment 3, these infants failed at discrimination after watching the same visual cues, but paired inconsistently with the two phonetic categories. At an age before which memory of word labels is demonstrated in the laboratory, 9-month-old infants use contrastive pairings between objects and sounds to influence their phonetic sensitivity. Phonetic learning may have a more functional basis than previous statistical learning mechanisms assume: infants may use cross-modal associations inherent in social contexts to learn native-language phonetic categories.

10.
Young men’s errors in sexual perception have been linked to sexual coercion. The current investigation sought to explicate the perceptual and decisional sources of these social perception errors, as well as their link to risk for sexual violence. General Recognition Theory (GRT; Ashby, F. G., & Townsend, J. T. (1986). Varieties of perceptual independence. Psychological Review, 93, 154-179) was used to estimate participants’ ability to discriminate between affective cues and clothing style cues and to measure illusory correlations between men’s perception of women’s clothing style and sexual interest. High-risk men were less sensitive to the distinction between women’s friendly and sexual interest cues relative to other men. In addition, they were more likely to perceive an illusory correlation between women’s diagnostic sexual interest cues (e.g., facial affect) and non-diagnostic cues (e.g., provocative clothing), which increases the probability that high-risk men will misperceive friendly women as intending to communicate sexual interest. The results provide information about the degree of risk conferred by individual differences in perceptual processing of women’s interest cues, and also illustrate how translational scientists might adapt GRT to examine research questions about individual differences in social perception.

11.
Spatial attention and audiovisual interactions in apparent motion
In this study, the authors combined the cross-modal dynamic capture task (involving the horizontal apparent movement of visual and auditory stimuli) with spatial cuing in the vertical dimension to investigate the role of spatial attention in cross-modal interactions during motion perception. Spatial attention was manipulated endogenously, either by means of a blocked design or by predictive peripheral cues, and exogenously by means of nonpredictive peripheral cues. The results of 3 experiments demonstrate a reduction in the magnitude of the cross-modal dynamic capture effect on cued trials compared with uncued trials. The introduction of neutral cues (Experiments 4 and 5) confirmed the existence of both attentional costs and benefits. This attention-related reduction in cross-modal dynamic capture was larger when a peripheral cue was used compared with when attention was oriented in a purely endogenous manner. In sum, the results suggest that spatial attention reduces illusory binding by facilitating the segregation of unimodal signals, thereby modulating audiovisual interactions in information processing. Thus, the effect of spatial attention occurs prior to or at the same time as cross-modal interactions involving motion information.

12.
Two identical visual targets moving across each other can be perceived either to bounce off or to stream through each other. A brief sound at the moment the targets coincide biases perception toward bouncing. We found that this bounce-inducing effect was attenuated when other identical sounds (auditory flankers) were presented 300 ms before and after the simultaneous sound. The attenuation occurred only when the simultaneous sound and auditory flankers had similar acoustic characteristics and the simultaneous sound was not salient. These results suggest that there is an aspect of auditory-grouping (saliency-assigning) processes that is context-sensitive and can be utilized by the visual system for solving ambiguity. Furthermore, control experiments revealed that such auditory context did not affect the perceptual qualities of the simultaneous sound. Because the attenuation effect is not manifest in the perception of acoustic characteristics of individual sound elements, we conclude that it is a genuine cross-modal effect.

13.
Does the perceptual processing of faces flexibly adapt to the requirements of the categorization task at hand, or does it operate independently of this cognitive context? Behavioral studies have shown that the fine and coarse spatial scales of a face are differentially processed depending on the categorization task performed, thus suggesting that the latter can influence stimulus perception. Here, we investigated the time course of these task influences on perceptual processing by examining the visual N170 face-sensitive event-related potential (ERP) while observers categorized faces for their gender and familiarity. Stimuli were full spectrum, or filtered versions that preserved either coarse or fine scale information of the faces. Behavioral results replicated previous findings of a differential processing of coarse and fine spatial scales across tasks. In addition, the N170 amplitude was larger in the Gender task as compared to the Familiarity task for low spatial frequency (LSF) faces exclusively, thus showing that task demands differentially modulated spatial scale processing of faces. These results suggest that the diagnosticity of scale-specific cues in categorization tasks can modulate face processing.
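To illustrate the stimulus manipulation this abstract describes, here is a minimal sketch (assumed, not from the study's methods; the function name and sigma value are hypothetical, and real studies typically specify cutoffs in cycles per face) of producing coarse- and fine-scale versions of a face image by low- and high-pass spatial filtering:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_scale_versions(image, sigma=4.0):
    """Split a grayscale image into coarse (LSF) and fine (HSF) versions.

    sigma is a hypothetical Gaussian low-pass cutoff.
    """
    img = image.astype(float)
    lsf = gaussian_filter(img, sigma)   # low-pass: coarse scale only
    hsf = img - lsf                     # residual high-pass: fine scale only
    return lsf, hsf

# Example with a random array standing in for a face photograph:
face = np.random.rand(256, 256)
lsf, hsf = spatial_scale_versions(face)
```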

14.
In a noisy environment where several people are speaking at once, why can listeners with normal hearing understand a target utterance to a certain extent? A recent advance in research on this well-known “cocktail party” problem is the decomposition of the effect of interfering speech into two components: energetic masking and informational masking. Unlike energetic masking, which arises in the peripheral auditory system, informational masking occurs at the psychological level and is modulated by cognitive processes. Accordingly, perceptual cues such as subjective spatial separation, visual signals correlated with the rhythm of the target utterance, and familiarity with certain features of the target utterance all produce release from masking. Examining the interactions among perceptual cues that reduce informational masking, and their higher-level cognitive modulation, is an important direction for future research.

15.
Past studies show that novel auditory stimuli, presented in the context of an otherwise repeated sound, capture participants’ attention away from a focal task, resulting in measurable behavioral distraction. Novel sounds are traditionally defined as rare and unexpected but past studies have not sought to disentangle these concepts directly. Using a cross-modal oddball task, we contrasted these aspects orthogonally by manipulating the base rate and conditional probabilities of sound events. We report for the first time that behavioral distraction does not result from a sound’s novelty per se but from the violation of the cognitive system’s expectation based on the learning of conditional probabilities and, to some extent, the occurrence of a perceptual change from one sound to another.
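To make the distinction between a sound's base rate and its conditional probability concrete, here is a minimal sketch (illustrative only; the probability values and function are hypothetical, not the study's actual design) that generates standard/deviant sound sequences in which overall rarity is held constant while predictability varies:

```python
import random

def make_sequence(n, p_deviant, predictable, seed=0):
    """Build a sequence of standard ('S') and deviant ('D') sounds.

    p_deviant   : overall base rate of the deviant sound
    predictable : if True, deviants occur at fixed intervals, so the
                  conditional probability of a deviant given the
                  preceding context is 1 at those positions and 0
                  elsewhere; if False, deviants are scattered at
                  random, so the conditional probability everywhere
                  equals the base rate.
    """
    rng = random.Random(seed)
    if predictable:
        period = round(1 / p_deviant)
        return ['D' if (i + 1) % period == 0 else 'S' for i in range(n)]
    return ['D' if rng.random() < p_deviant else 'S' for _ in range(n)]

# Same 10% base rate under both regimes:
rare_predictable = make_sequence(1000, 0.10, predictable=True)
rare_surprising  = make_sequence(1000, 0.10, predictable=False)
print(rare_predictable.count('D'), rare_surprising.count('D'))  # ~100 each
```

On the abstract's account, only deviants in the second, unpredictable sequence should produce distraction: they violate learned conditional expectations even though their base rate is identical.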

16.
It is well known that the nervous system combines information from different cues within and across sensory modalities to improve performance on perceptual tasks. In this article, we present results showing that in a visual motion-detection task, concurrent auditory motion stimuli improve accuracy even when they do not provide any useful information for the task. When participants judged which of two stimulus intervals contained visual coherent motion, the addition of identical moving sounds to both intervals improved accuracy. However, this enhancement occurred only with sounds that moved in the same direction as the visual motion. Therefore, it appears that the observed benefit of auditory stimulation is due to auditory-visual interactions at a sensory level. Thus, auditory and visual motion-processing pathways interact at a sensory-representation level in addition to the level at which perceptual estimates are combined.

17.
Cross-modal endogenous selective attention
Zhao Chen, Yang Huahai, Zhang Kan. Acta Psychologica Sinica, 1999, 32(2), 148-153.
This experiment used a spatial cuing technique to study endogenous selective attention across the visual and auditory modalities. The results showed that central visual cues reliably induced endogenous visual selective attention, and that central auditory cues could also induce endogenous visual selective attention at longer SOAs (at least 500 ms). The findings support the hypothesis that vision and audition have modality-specific attentional processing channels that are nevertheless interconnected.

18.
Martino G, Marks LE. Perception, 1999, 28(7), 903-923.
We tested the semantic coding hypothesis, which states that cross-modal interactions observed in speeded classification tasks arise after perceptual information is recoded into an abstract format common to perceptual and linguistic systems. Using a speeded classification task, we first confirmed the presence of congruence interactions between auditory pitch and visual lightness and observed Garner-type interference with nonlinguistic (perceptual) stimuli (low-frequency and high-frequency tones, black and white squares). Subsequently, we found that modifying the visual stimuli by (a) making them lexical (related words) or (b) reducing their compactness or figural 'goodness' altered congruence effects and Garner interference. The results are consistent with the semantic coding hypothesis, but only in part, and suggest the need for additional assumptions regarding the role of perceptual organization in cross-modal dimensional interactions.

19.
Recently, Guzman-Martinez, Ortega, Grabowecky, Mossbridge, and Suzuki (Current Biology, 22(5), 383–388, 2012) reported that observers could systematically match auditory amplitude modulations and tactile amplitude modulations to visual spatial frequencies, proposing that these cross-modal matches produced automatic attentional effects. Using a series of visual search tasks, we investigated whether informative auditory, tactile, or bimodal cues can guide attention toward a visual Gabor of matched spatial frequency (among others with different spatial frequencies). These cues improved visual search for some but not all frequencies. Auditory cues improved search only for the lowest and highest spatial frequencies, whereas tactile cues were more effective and frequency specific, although less effective than visual cues. Importantly, although tactile cues could produce efficient search when informative, they had no effect when uninformative. This suggests that cross-modal frequency matching occurs at a cognitive rather than sensory level and, therefore, influences visual search through voluntary, goal-directed behavior, rather than automatic attentional capture.

20.
Speech perception is an ecologically important example of the highly context-dependent nature of perception; adjacent speech, and even nonspeech, sounds influence how listeners categorize speech. Some theories emphasize linguistic or articulation-based processes in speech-elicited context effects and peripheral (cochlear) auditory perceptual interactions in non-speech-elicited context effects. The present studies challenge this division. Results of three experiments indicate that acoustic histories composed of sine-wave tones drawn from spectral distributions with different mean frequencies robustly affect speech categorization. These context effects were observed even when the acoustic context temporally adjacent to the speech stimulus was held constant and when more than a second of silence or multiple intervening sounds separated the nonlinguistic acoustic context and speech targets. These experiments indicate that speech categorization is sensitive to statistical distributions of spectral information, even if the distributions are composed of nonlinguistic elements. Acoustic context need be neither linguistic nor local to influence speech perception.
