Similar Articles
20 similar articles retrieved
1.
A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones—both containing two varying features—were presented simultaneously. In Experiment 2, two gratings and two tones—each containing only one varying feature—were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to that of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.
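As an illustrative aside (not a formula from this abstract), shared-resource accounts of working-memory precision are often written as a power law in which a fixed pool is divided among the stored items; the symbols J(1), N and alpha below are assumptions introduced here for illustration only.

% Hedged sketch of a shared-resource (power-law) precision model; notation assumed, not taken from the abstract.
% J(N): memory precision (the inverse of the discrimination threshold) when N features are stored.
% J(1): precision for a single feature; alpha >= 1 controls how steeply precision falls with load.
\[ J(N) = \frac{J(1)}{N^{\alpha}}, \qquad \alpha \ge 1 . \]

On such a reading, the abstract's result corresponds to a single pool in which N counts features regardless of modality, so thresholds rise with total load in the same way for unimodal and crossmodal combinations.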

2.
Build-up of proactive interference (PI) with visual-picture and auditory-verbal input modalities and the subsequent release from PI following a change in modality were investigated in three experiments with boys and girls, as follows: Experiment I (n = 64) at two mean age levels, 7–6 and 10–5; Experiment II (n = 64) at mean age 7–6; and Experiment III (n = 48) at age 11–4. PI build-up occurred in both modalities for all ages tested. Release from PI occurred following a change from auditory to visual input but not following a visual to auditory shift. In the final experiment, this asymmetrical improvement in performance was dependent upon an interaction between the modality of the input and distractor task on the final or release trial; changing to visual input produced a release effect regardless of the distractor task modality, while auditory input was associated with improvement in recall if a visual distractor task was employed whether or not a shift in input modality had occurred. This improvement was hypothesized to represent a decrease in retroactive interference rather than a release from proactive interference.

3.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

4.
This study used a spatial task-switching paradigm and manipulated the salience of visual and auditory stimuli to examine the influence of bottom-up attention on the visual dominance effect. The results showed that the salience of the visual and auditory stimuli significantly affected visual dominance. In Experiment 1, when the auditory stimulus was highly salient, the visual dominance effect was markedly weakened. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, the visual dominance effect was further reduced but still present. The results support the biased competition account: in crossmodal audiovisual interaction, visual stimuli are more salient and therefore enjoy a processing advantage during multisensory integration.

5.
This study investigated multisensory interactions in the perception of auditory and visual motion. When auditory and visual apparent motion streams are presented concurrently in opposite directions, participants often fail to discriminate the direction of motion of the auditory stream, whereas perception of the visual stream is unaffected by the direction of auditory motion (Experiment 1). This asymmetry persists even when the perceived quality of apparent motion is equated for the 2 modalities (Experiment 2). Subsequently, it was found that this visual modulation of auditory motion is caused by an illusory reversal in the perceived direction of sounds (Experiment 3). This "dynamic capture" effect occurs over and above ventriloquism among static events (Experiments 4 and 5), and it generalizes to continuous motion displays (Experiment 6). These data are discussed in light of related multisensory phenomena and their support for a "modality appropriateness" interpretation of multisensory integration in motion perception.

6.
Auditory and visual similarity was manipulated in a same-different reaction-time task to investigate the use of modality-specific codes in same-different judgments for pairs of letters. Experiment 1 showed that letters presented simultaneously in the auditory and visual modalities were matched on the auditory dimension. In Experiment 2, the letters were presented sequentially, and the modality of the second letter was randomly varied. Subjects matched the pairs on the modality dimension of the second letter even though the modality could not be reliably predicted. In Experiment 3, subjects judged adjacent pairs of letters presented for 50 msec, and in one condition they also named the letters. Matches were made on visual codes in both conditions. In general, the results indicate that when subjects are instructed to determine if two letters are the same, the letters will be matched on a modality-specific code in a way that will minimize the information processing necessary to complete the match.

7.
Shadowing (vocalization-at-presentation) was applied to a bisensory situation where different messages were simultaneously presented to the visual and auditory modalities. Three groups of subjects were employed: Group I shadowed the visual modality; Group II shadowed the auditory modality; Group III was a control, shadowing neither modality. Shadowing in the present experiment facilitated recall of the shadowed modality, particularly the visual modality, which is usually inferior to auditory recall.

It also became apparent that visual recall in an ordinary bisensory situation was minimal, approaching an incidental level, and that a true bisensory situation with an equal division of attention between the two modalities does not exist.

8.
Parr LA. Animal Cognition, 2004, 7(3): 171-178
The ability of organisms to discriminate social signals, such as affective displays, using different sensory modalities is important for social communication. However, a major problem for understanding the evolution and integration of multimodal signals is determining how humans and animals attend to different sensory modalities, and how these different modalities contribute to the perception and categorization of social signals. Using a matching-to-sample procedure, chimpanzees discriminated videos of conspecifics' facial expressions that contained only auditory or only visual cues by selecting one of two facial-expression photographs that matched the expression category represented by the sample. Other videos were edited to contain incongruent sensory cues, i.e., visual features of one expression but auditory features of another. In these cases, subjects were free to select the expression that matched either the auditory or visual modality, whichever was more salient for that expression type. Results showed that chimpanzees were able to discriminate facial expressions using only auditory or visual cues, and when these modalities were mixed. However, in these latter trials, depending on the expression category, clear preferences for either the visual or auditory modality emerged. Pant-hoots and play faces were discriminated preferentially using the auditory modality, while screams were discriminated preferentially using the visual modality. Therefore, depending on the type of expressive display, the auditory and visual modalities were differentially salient in ways that appear consistent with the ethological importance of that display's social function.

9.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.

10.
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.

11.
The present study examined the effects of cue-based preparation and cue-target modality mapping in crossmodal task switching. In two experiments, we randomly presented lateralized visual and auditory stimuli simultaneously. Subjects were asked to make a left/right judgment for a stimulus in only one of the modalities. Prior to each trial, the relevant stimulus modality was indicated by a visual or auditory cue. The cueing interval was manipulated to examine preparation. In Experiment 1, we used a corresponding mapping of cue modality and stimulus modality, whereas in Experiment 2 the mapping of cue and stimulus modalities was reversed. We found reduced modality-switch costs with a long cueing interval, showing that attention shifts to stimulus modalities can be prepared, irrespective of cue-target modality mapping. We conclude that perceptual processing in crossmodal switching can be biased in a preparatory way towards task-relevant stimulus modalities.

12.
Contrasting results in visual and auditory working memory studies suggest that the mechanisms of association between location and identity of stimuli depend on the sensory modality of the input. In this auditory study, we tested whether the association of two features both encoded in the “what” stream is different from the association between a “what” and a “where” feature. In an old–new recognition task, blindfolded participants were presented with sequences of sounds varying in timbre, pitch and location. They were required to judge whether the timbre, pitch or location of a single probe stimulus was identical to or different from that of one of the sounds in the previous sequence. Only variations in one of the three features were task-relevant, whereas the other two features could also vary, producing task-irrelevant changes. Results showed that task-irrelevant variations in the “what” features (either timbre or pitch) impaired recognition of sound location and of the other, task-relevant “what” feature, whereas changes in sound location did not affect the recognition of either one of the “what” features. We conclude that the identity of sounds is incidentally processed even when not required by the task, whereas sound location is not maintained when task-irrelevant.

13.
Two experiments comparing imaginative processing in different modalities with semantic processing were carried out to investigate whether conceptual knowledge can be represented in different formats. Participants were asked to judge the similarity between visual, auditory, and olfactory images in the imaginative block, and whether two items belonged to the same category in the semantic block. Items were verbally cued in both experiments. The degree of similarity between the imaginative and semantic items was changed across experiments. Experiment 1 showed that semantic processing was faster than visual and auditory imaginative processing, whereas no differentiation was possible between semantic processing and olfactory imaginative processing. Experiment 2 revealed that only visual imaginative processing could be differentiated from semantic processing in terms of accuracy. These results show that visual and auditory imaginative processing can be differentiated from semantic processing, although both visual and auditory images rely strongly on semantic representations. By contrast, no differentiation is possible within the olfactory domain. Results are discussed within the framework of the imagery debate.

14.
Models of duration bisection have focused on the effects of stimulus spacing and stimulus modality. However, interactions between stimulus spacing and stimulus modality have not been examined systematically. Two duration bisection experiments that address this issue are reported. Experiment 1 showed that stimulus spacing influenced the classification of auditory, but not visual, stimuli. Experiment 2 used a wider stimulus range, and showed stimulus spacing effects for both visual and auditory stimuli, although the effects were larger for auditory stimuli. A version of Temporal Range Frequency Theory was applied to the data, and was used to demonstrate that the qualitative pattern of results can be captured with the single assumption that the durations of visual stimuli are less discriminable from one another than are the durations of auditory stimuli.
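For orientation, Temporal Range Frequency Theory builds on Parducci's range-frequency formulation, in which the judged magnitude of a duration mixes its position within the stimulus range and its rank within the presented set; the notation below is a hedged sketch of that general form, not the specific model fitted in the paper.

% Range-frequency sketch (notation assumed): s_i is the i-th duration in a set of N,
% rank(s_i) its rank from shortest to longest, and w a weighting parameter in [0, 1].
\[ J_i = w\,\frac{s_i - s_{\min}}{s_{\max} - s_{\min}} + (1 - w)\,\frac{\operatorname{rank}(s_i) - 1}{N - 1} . \]

On this kind of account, stimulus-spacing effects arise from the rank (frequency) term, and durations that are harder to tell apart register spacing less strongly, which is consistent with the abstract's single assumption that visual durations are less discriminable than auditory ones.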

15.
Perceptual judgments can be affected by expectancies regarding the likely target modality. This has been taken as evidence for selective attention to particular modalities, but alternative accounts remain possible in terms of response priming, criterion shifts, stimulus repetition, and spatial confounds. We examined whether attention to a sensory modality would still be apparent when these alternatives were ruled out. Subjects made a speeded detection response (Experiment 1), an intensity or color discrimination (Experiment 2), or a spatial discrimination response (Experiments 3 and 4) for auditory and visual targets presented in a random sequence. On each trial, a symbolic visual cue predicted the likely target modality. Responses were always more rapid and accurate for targets presented in the expected versus unexpected modality, implying that people can indeed selectively attend to the auditory or visual modalities. When subjects were cued to both the probable modality of a target and its likely spatial location (Experiment 4), separable modality-cuing and spatial-cuing effects were observed. These studies introduce appropriate methods for distinguishing attention to a modality from the confounding factors that have plagued previous normal and clinical research.

16.
Involuntary listening aids seeing: evidence from human electrophysiology
It is well known that sensory events of one modality can influence judgments of sensory events in other modalities. For example, people respond more quickly to a target appearing at the location of a previous cue than to a target appearing at another location, even when the two stimuli are from different modalities. Such cross-modal interactions suggest that involuntary spatial attention mechanisms are not entirely modality-specific. In the present study, event-related brain potentials (ERPs) were recorded to elucidate the neural basis and timing of involuntary, cross-modal spatial attention effects. We found that orienting spatial attention to an irrelevant sound modulates the ERP to a subsequent visual target over modality-specific, extrastriate visual cortex, but only after the initial stages of sensory processing are completed. These findings are consistent with the proposal that involuntary spatial attention orienting to auditory and visual stimuli involves shared, or at least linked, brain mechanisms.

17.
Experiments 1 and 2 compared, with a single-stimulus procedure, the discrimination of filled and empty intervals in both the auditory and visual modalities. In Experiment 1, in which intervals were about 250 msec, discrimination was superior with empty intervals in both modalities. In Experiment 2, with intervals lasting about 50 msec, empty intervals showed superior performance with visual signals only. In Experiment 3, for the auditory modality at 250 msec, discrimination was easier with empty intervals than with filled intervals under both the forced-choice (FC) and the single-stimulus (SS) modes of presentation, and discrimination was easier with the FC than with the SS method. Experiment 4, however, showed that at 50 and 250 msec, with an FC-adaptive procedure, there were no differences between filled and empty intervals in the auditory mode; the differences observed with the visual mode in Experiments 1 and 2 remained significant. Finally, Experiment 5 compared differential thresholds for four marker-type conditions, filled and empty intervals in the auditory and visual modes, for durations ranging from .125 to 4 sec. The results showed (1) that the differential-threshold differences among marker types are pronounced for short durations but decrease with longer durations, and (2) that a generalized Weber’s law generally holds for these conditions. The results as a whole are discussed in terms of timing mechanisms.
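As a point of reference (notation introduced here, not drawn from the abstract), the generalized Weber's law is commonly written with an additive constant, so that the Weber fraction is roughly constant at long durations but rises at short ones.

% Generalized Weber's law sketch (assumed notation): Delta T is the differential threshold
% at base duration T, k the asymptotic Weber fraction, and Delta T_0 a duration-independent constant.
\[ \Delta T = k\,T + \Delta T_{0} \quad\Longrightarrow\quad \frac{\Delta T}{T} = k + \frac{\Delta T_{0}}{T} . \]

Under this form, marker-type effects that load mainly on the constant term would be visible at short durations and wash out at longer ones, which is one way to read the pattern reported for Experiment 5.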

18.
Interactions Between Exogenous Auditory and Visual Spatial Attention
Six experiments investigated cross-modal links between exogenous auditory and visual spatial attention, employing Posner's cueing paradigm in detection, localization, and discrimination tasks. Results indicated cueing in detection tasks with visual or auditory cues and visual targets, but not with auditory targets (Experiment 1). In the localization tasks, cueing was found with both visual and auditory targets. Inhibition of return was apparent only in the within-modality conditions (Experiment 2). This suggests that it matters whether the attention system is activated directly (within a modality) or indirectly (between modalities). Increasing the cue validity from 50% to 80% influenced performance only in the localization task (Experiment 4). These findings are interpreted as indicative of modality-specific but interacting attention mechanisms. The results of Experiments 5 and 6 (up/down discrimination tasks) also show cross-modal cueing, but not with visual cues and auditory targets. Furthermore, there was no inhibition of return in any condition. This suggests that some cueing effects may be task dependent.

19.
Four experiments examined the effects of encoding multiple standards in a temporal generalization task in the visual and auditory modalities, both singly and cross-modally, using stimulus durations ranging, across different experiments, from 100 to 1,400 ms. Previous work has shown that encoding and storing multiple auditory standards of different durations resulted in systematic interference with the memory of the standard, characterized by a shift in the location of peak responding, and this result, from Ogden, Wearden, and Jones (2008), was replicated in the present Experiment 1. Experiment 2 employed the basic procedure of Ogden et al. using visual stimuli and found that encoding multiple visual standards did not lead to performance deterioration or any evidence of systematic interference between the standards. Experiments 3 and 4 examined potential cross-modal interference. When two standards of different modalities and durations were encoded and stored together, there was also no evidence of interference between the two. Taken together, these results, and those of Ogden et al., suggest that, in humans, visual temporal reference memory may be more permanent than auditory reference memory and that auditory temporal information and visual temporal information do not mutually interfere in reference memory.

20.
Properties of auditory and visual sensory memory were compared by examining subjects' recognition performance for randomly generated binary auditory sequential frequency patterns and binary visual sequential color patterns within a forced-choice paradigm. Experiment 1 demonstrated serial-position effects in the auditory and visual modalities, consisting of both primacy and recency effects. Experiment 2 found that retention of auditory and visual information was remarkably similar when assessed across a 10 s interval. Experiments 3 and 4, taken together, showed that the recency effect in sensory memory is affected more by the type of response required (recognition vs. reproduction) than by the sensory modality employed. These studies suggest that auditory and visual sensory memory stores for nonverbal stimuli share similar properties with respect to serial-position effects and persistence over time.
