Similar documents
Found 20 similar documents (search time: 31 ms)
1.
Adults and children (5- and 8-year-olds) performed a temporal bisection task with either auditory or visual signals and either a short (0.5-1.0s) or long (4.0-8.0s) duration range. Their working memory and attentional capacities were assessed by a series of neuropsychological tests administered in both the auditory and visual modalities. Results showed an age-related improvement in the ability to discriminate time regardless of the sensory modality and duration. However, this improvement was seen to occur more quickly for auditory signals than for visual signals and for short durations rather than for long durations. The younger children exhibited the poorest ability to discriminate time for long durations presented in the visual modality. Statistical analyses of the neuropsychological scores revealed that an increase in working memory and attentional capacities in the visuospatial modality was the best predictor of age-related changes in temporal bisection performance for both visual and auditory stimuli. In addition, the poorer time sensitivity for visual stimuli than for auditory stimuli, especially in the younger children, was explained by the fact that the temporal processing of visual stimuli requires more executive attention than that of auditory stimuli.

2.
When participants judge multimodal audiovisual stimuli, the auditory information strongly dominates temporal judgments, whereas the visual information dominates spatial judgments. However, temporal judgments are not independent of spatial features. For example, in the kappa effect, the time interval between two marker stimuli appears longer when they originate from spatially distant sources rather than from the same source. We investigated the kappa effect for auditory markers presented with accompanying irrelevant visual stimuli. The spatial sources of the markers were varied such that they were either congruent or incongruent across modalities. In two experiments, we demonstrated that the spatial layout of the visual stimuli affected perceived auditory interval duration. This effect occurred although the visual stimuli were designated to be task-irrelevant for the duration reproduction task in Experiment 1, and even when the visual stimuli did not contain sufficient temporal information to perform a two-interval comparison task in Experiment 2. We conclude that the visual and auditory marker stimuli were integrated into a combined multisensory percept containing temporal as well as task-irrelevant spatial aspects of the stimulation. Through this multisensory integration process, visuospatial information affected even temporal judgments, which are typically dominated by the auditory modality.

3.
Parr LA. Animal Cognition, 2004, 7(3): 171-178
The ability of organisms to discriminate social signals, such as affective displays, using different sensory modalities is important for social communication. However, a major problem for understanding the evolution and integration of multimodal signals is determining how humans and animals attend to different sensory modalities, and how these different modalities contribute to the perception and categorization of social signals. Using a matching-to-sample procedure, chimpanzees discriminated videos of conspecifics' facial expressions that contained only auditory or only visual cues by selecting one of two facial expression photographs that matched the expression category represented by the sample. Other videos were edited to contain incongruent sensory cues, i.e., visual features of one expression but auditory features of another. In these cases, subjects were free to select the expression that matched either the auditory or visual modality, whichever was more salient for that expression type. Results showed that chimpanzees were able to discriminate facial expressions using only auditory or visual cues, and when these modalities were mixed. However, in these latter trials, depending on the expression category, clear preferences for either the visual or auditory modality emerged. Pant-hoots and play faces were discriminated preferentially using the auditory modality, while screams were discriminated preferentially using the visual modality. Therefore, depending on the type of expressive display, the auditory and visual modalities were differentially salient in ways that appear consistent with the ethological importance of that display's social function.

4.
Understanding how the human brain integrates features of perceived events calls for the examination of binding processes within and across different modalities and domains. Recent studies of feature-repetition effects have demonstrated interactions between shape, color, and location in the visual modality and between pitch, loudness, and location in the auditory modality: repeating one feature is beneficial if other features are also repeated, but detrimental if not. These partial-repetition costs suggest that co-occurring features are spontaneously bound into temporary event files. Here, we investigated whether these observations can be extended to features from different sensory modalities, combining visual and auditory features in Experiment 1 and auditory and tactile features in Experiment 2. The same types of interactions as for unimodal feature combinations were obtained, including interactions between stimulus and response features. However, the size of the interactions varied with the particular combination of features, suggesting that the salience of features and the temporal overlap between feature-code activations play a mediating role.

5.
Stelmach, Herdman, and McNeil (1994) recently suggested that the perceived duration of attended stimuli is shorter than that of unattended ones. In contrast, the attenuation hypothesis (Thomas & Weaver, 1975) suggests the reverse relation between directed attention and perceived duration. We conducted six experiments to test the validity of the two contradictory hypotheses. In all the experiments, attention was directed to one of two possible stimulus sources. Experiments 1 and 2 employed stimulus durations from 70 to 270 msec. A stimulus appeared in either the visual or the auditory modality. Stimuli in the attended modality were rated as longer than stimuli in the unattended modality. Experiment 3 replicated this finding using a different psychophysical procedure. Experiments 4-6 showed that the finding applies not only to stimuli from different sensory modalities but also to stimuli appearing at different locations within the visual field. The results of all six experiments support the assumption that directed attention prolongs the perceived duration of a stimulus.

6.
Participants judged whether two sequential visual events were presented for the same length of time or for different lengths of time, while ignoring two irrelevant sequential sounds. Sounds could be either the same or different in terms of their duration or their pitch. When the visual stimuli were in conflict with the sound stimuli (e.g., the visual events were the same, but the sounds were different), performance declined. This was true whether sounds varied in duration or in pitch. The influence of sounds was eliminated when visual duration discriminations were made easier. Together these results demonstrate that the resolution of crossmodal conflicts is flexible within the neural and cognitive architecture. More importantly, they suggest that interactions between modalities can extend to abstract levels of same/different representations.

7.
Visual stimuli are often processed more efficiently than accompanying stimuli in another modality. In line with this “visual dominance”, earlier studies on attentional switching showed a clear benefit for visual stimuli in a bimodal visual–auditory modality-switch paradigm that required spatial stimulus localization in the relevant modality. The present study aimed to examine the generality of this visual dominance effect. The modality appropriateness hypothesis proposes that stimuli in different modalities are differentially effectively processed depending on the task dimension, such that processing of visual stimuli is favored in the dimension of space, whereas processing of auditory stimuli is favored in the dimension of time. In the present study, we examined this proposition by using a temporal duration judgment in a bimodal visual–auditory switching paradigm. Two experiments demonstrated that crossmodal interference (i.e., temporal stimulus congruence) was larger for visual stimuli than for auditory stimuli, suggesting auditory dominance when performing temporal judgment tasks. However, attention switch costs were larger for the auditory modality than for the visual modality, indicating a dissociation of the mechanisms underlying crossmodal competition in stimulus processing and modality-specific biasing of attentional set.

8.
Using a spatial task-switching paradigm and controlling the salience of visual and auditory stimuli, this study examined how bottom-up attention affects the visual dominance effect. The results showed that the salience of visual and auditory stimuli significantly modulated visual dominance. In Experiment 1, when the auditory stimulus was highly salient, the visual dominance effect was markedly weakened. In Experiment 2, when the auditory stimulus was highly salient and the visual stimulus was of low salience, the visual dominance effect was further weakened but still present. The results support the biased-competition theory: in crossmodal audiovisual interaction, visual stimuli are more salient and hold a processing advantage during multisensory integration.

9.
This experiment investigated the effect of modality on temporal discrimination in children aged 5 and 8 years and adults using a bisection task with visual and auditory stimuli ranging from 200 to 800 ms. In the first session, participants were required to compare stimulus durations with standard durations presented in the same modality (within-modality session), and in the second session in different modalities (cross-modal session). Psychophysical functions were orderly in all age groups, with the proportion of long responses (judgement that a duration was more similar to the long than to the short standard) increasing with the stimulus duration, although functions were flatter in the 5-year-olds than in the 8-year-olds and adults. Auditory stimuli were judged to be longer than visual stimuli in all age groups. The statistical results and a theoretical model suggested that this modality effect was due to differences in the pacemaker speed of the internal clock. The 5-year-olds also judged visual stimuli as more variable than auditory ones, indicating that their temporal sensitivity was lower in the visual than in the auditory modality.
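The bisection procedure described above can be illustrated with a minimal Monte-Carlo sketch (hypothetical parameters, not the authors' fitted model): a noisy percept with scalar variability is classified against the geometric mean of the two standards, so the proportion of "long" responses rises with stimulus duration, and a higher noise level (Weber fraction) yields the flatter psychometric function seen in younger children.

```python
import math
import random

def p_long(duration_ms, short=200, long=800, weber=0.2, trials=10_000):
    """Monte-Carlo estimate of the proportion of 'long' responses in a
    temporal bisection task. Each trial draws a noisy percept with scalar
    variability (SD proportional to duration) and responds 'long' if the
    percept exceeds the geometric mean of the short and long standards,
    a typical empirical bisection point."""
    bisection_point = math.sqrt(short * long)
    hits = 0
    for _ in range(trials):
        percept = random.gauss(duration_ms, weber * duration_ms)
        if percept > bisection_point:
            hits += 1
    return hits / trials

# A higher Weber fraction (more timing noise) flattens the psychometric
# function, mimicking the 5-year-olds' lower temporal sensitivity.
for d in (200, 400, 600, 800):
    print(d, round(p_long(d, weber=0.1), 2), round(p_long(d, weber=0.3), 2))
```

The specific Weber fractions (0.1 vs. 0.3) are illustrative assumptions; the qualitative shape, a sigmoid in duration that flattens as noise grows, is what the abstract's "flatter functions" refers to.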

10.
A common assumption in the working memory literature is that the visual and auditory modalities have separate and independent memory stores. Recent evidence on visual working memory has suggested that resources are shared between representations, and that the precision of representations sets the limit for memory performance. We tested whether memory resources are also shared across sensory modalities. Memory precision for two visual (spatial frequency and orientation) and two auditory (pitch and tone duration) features was measured separately for each feature and for all possible feature combinations. Thus, only the memory load was varied, from one to four features, while keeping the stimuli similar. In Experiment 1, two gratings and two tones—both containing two varying features—were presented simultaneously. In Experiment 2, two gratings and two tones—each containing only one varying feature—were presented sequentially. The memory precision (delayed discrimination threshold) for a single feature was close to the perceptual threshold. However, as the number of features to be remembered was increased, the discrimination thresholds increased more than twofold. Importantly, the decrease in memory precision did not depend on the modality of the other feature(s), or on whether the features were in the same or in separate objects. Hence, simultaneously storing one visual and one auditory feature had an effect on memory precision equal to those of simultaneously storing two visual or two auditory features. The results show that working memory is limited by the precision of the stored representations, and that working memory can be described as a resource pool that is shared across modalities.

11.
In order to perceive the world coherently, we need to integrate features of objects and events that are presented to our senses. Here we investigated the temporal limit of integration in unimodal visual and auditory as well as crossmodal auditory-visual conditions. Participants were presented with alternating visual and auditory stimuli and were asked to match them either within or between modalities. At alternation rates of about 4 Hz and higher, participants were no longer able to match visual and auditory stimuli across modalities correctly, while matching within either modality showed higher temporal limits. Manipulating different temporal stimulus characteristics (stimulus offsets and/or auditory-visual SOAs) did not change performance. Interestingly, the difference in temporal limits between crossmodal and unimodal conditions appears strikingly similar to temporal limit differences between unimodal conditions when additional features have to be integrated. We suggest that adding a modality across which sensory input is integrated has the same effect as adding an extra feature to be integrated within a single modality.

12.
Crossmodal selective attention was investigated in a cued task switching paradigm using bimodal visual and auditory stimulation. A cue indicated the imperative modality. Three levels of spatial S–R associations were established following perceptual (location), structural (numerical), and conceptual (verbal) set-level compatibility. In Experiment 1, participants switched attention between the auditory and visual modality either with a spatial-location or spatial-numerical stimulus set. In the spatial-location set, participants performed a localization judgment on left vs. right presented stimuli, whereas the spatial-numerical set required a magnitude judgment about a visually or auditorily presented number word. Single-modality blocks with unimodal stimuli were included as a control condition. In Experiment 2, the spatial-numerical stimulus set was replaced by a spatial-verbal stimulus set using direction words (e.g., “left”). RT data showed modality switch costs, which were asymmetric across modalities in the spatial-numerical and spatial-verbal stimulus set (i.e., larger for auditory than for visual stimuli), and congruency effects, which were asymmetric primarily in the spatial-location stimulus set (i.e., larger for auditory than for visual stimuli). This pattern of effects suggests task-dependent visual dominance.

13.
Auditory stimuli usually have longer subjective durations than visual ones for the same real duration, although performance on many timing tasks is similar in form across modalities. One suggestion is that auditory and visual stimuli are initially timed by different mechanisms, but later converted into some common duration code which is amodal. The present study investigated this using a temporal generalization interference paradigm. In test blocks, people decided whether comparison durations did or did not match a 400-ms standard, on average. Test blocks alternated with interference blocks in which durations were systematically shorter or longer than in test blocks, and interference was found, in the direction of the durations in the interference blocks, even when the interference blocks used stimuli in a different modality from the test block. This provides what may be the first direct experimental evidence for a “common code” for durations initially presented in different modalities at some level of the human timing system.

14.
The iambic-trochaic law has been proposed to account for the grouping of auditory stimuli: Sequences of sounds that differ only in duration are grouped as iambs (i.e., the most prominent element marks the end of a sequence of sounds), and sequences that differ only in pitch or intensity are grouped as trochees (i.e., the most prominent element marks the beginning of a sequence). In 3 experiments, each comprising a familiarization and a test phase, we investigated whether a similar grouping principle is also present in the visual modality. During familiarization, sequences of visual stimuli were repeatedly presented to participants, who were asked to memorize their order of presentation. In the test phase, participants were better at remembering fragments of the familiarization sequences that were consistent with the iambic-trochaic law. Thus, they were better at remembering fragments that had the element with longer duration in final position (iambs) and fragments that had the element with either higher temporal frequency or higher intensity in initial position (trochees), as compared with fragments that were inconsistent with the iambic-trochaic law or that never occurred during familiarization.

15.
It has been proposed that the perception of very short durations is governed by sensory mechanisms, whereas the perception of longer durations depends on cognitive capacities. Four duration discrimination tasks (modalities: visual, auditory; base durations: 100 ms, 1000 ms) were used to study the relation between time perception, age, sex, and cognitive abilities (alertness, visual and verbal working memory, general fluid reasoning) in 100 subjects aged between 21 and 84 years. Temporal acuity was higher (i.e., Weber fractions were lower) for longer stimuli and for the auditory modality. Age was related to the visual 100 ms condition only, with lower temporal acuity in older participants. Alertness was significantly related to auditory and visual Weber fractions for shorter stimuli only. Additionally, visual working memory was a significant predictor for shorter visual stimuli. These results indicate that alertness, but also working memory, are associated with temporal discrimination of very brief durations.
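Temporal acuity in such tasks is conventionally summarized as a Weber fraction: the just-noticeable difference divided by the base duration, with lower values meaning finer acuity. A minimal sketch with illustrative (hypothetical) thresholds, not the study's actual data:

```python
def weber_fraction(threshold_ms, base_ms):
    """Weber fraction: discrimination threshold (just-noticeable
    difference) expressed relative to the base duration."""
    return threshold_ms / base_ms

# Hypothetical thresholds illustrating the qualitative pattern reported
# above: acuity is better (the Weber fraction lower) for the auditory
# modality and for the longer base duration.
print(weber_fraction(25, 100))    # visual, 100 ms base
print(weber_fraction(15, 100))    # auditory, 100 ms base
print(weber_fraction(100, 1000))  # visual, 1000 ms base
```

Expressing thresholds as fractions rather than raw milliseconds is what makes the 100 ms and 1000 ms conditions directly comparable.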

16.
Four experiments examined judgements of the duration of auditory and visual stimuli. Two used a bisection method, and two used verbal estimation. Auditory/visual differences were found when durations of auditory and visual stimuli were explicitly compared and when durations from both modalities were mixed in partition bisection. Differences in verbal estimation were also found both when people received a single modality and when they received both. In all cases, the auditory stimuli appeared longer than the visual stimuli, and the effect was greater at longer stimulus durations, consistent with a “pacemaker speed” interpretation of the effect. Results suggested that Penney, Gibbon, and Meck's (2000) “memory mixing” account of auditory/visual differences in duration judgements, while correct in some circumstances, was incomplete, and that in some cases people were basing their judgements on some preexisting temporal standard.

17.
Humans have a strong tendency to spontaneously group visual or auditory stimuli together in larger patterns. One of these perceptual grouping biases is formulated as the iambic/trochaic law, where humans group successive tones alternating in pitch and intensity as trochees (high–low and loud–soft) and alternating in duration as iambs (short–long). The grouping of alternations in pitch and intensity into trochees is a human universal and is also present in one non-human animal species, rats. The perceptual grouping of sounds alternating in duration seems to be affected by native language in humans and has so far not been found among animals. In the current study, we explore to which extent these perceptual biases are present in a songbird, the zebra finch. Zebra finches were trained to discriminate between short strings of pure tones organized as iambs and as trochees. One group received tones that alternated in pitch, a second group heard tones alternating in duration, and for a third group, tones alternated in intensity. Those zebra finches that showed sustained correct discrimination were next tested with longer, ambiguous strings of alternating sounds. The zebra finches in the pitch condition categorized ambiguous strings of alternating tones as trochees, similar to humans. However, most of the zebra finches in the duration and intensity condition did not learn to discriminate between training stimuli organized as iambs and trochees. This study shows that the perceptual bias to group tones alternating in pitch as trochees is not specific to humans and rats, but may be more widespread among animals.

18.
We examined the role of Pavlovian and operant relations in behavioral momentum by arranging response-contingent alternative reinforcement in one component of a three-component multiple concurrent schedule with rats. This permitted the simultaneous arranging of different response-reinforcer (operant) and stimulus-reinforcer (Pavlovian) contingencies during three baseline conditions. Auditory or visual stimuli were used as discriminative stimuli within the multiple concurrent schedules. Resistance to change of a target response was assessed during a single session of extinction following each baseline condition. The rate of the target response during baseline varied inversely with the rate of response-contingent reinforcement derived from a concurrent source, regardless of whether the discriminative stimuli were auditory or visual. Resistance to change of the target response, however, did depend on the discriminative-stimulus modality. Resistance to change in the presence of visual stimuli was a positive function of the Pavlovian contingencies, whereas resistance to change was unrelated to either the operant or Pavlovian contingencies when the discriminative stimuli were auditory. Stimulus salience may be a factor in determining the differences in resistance to change across sensory modalities.

19.
The effects of signal modality on duration classification in college students were studied with the duration bisection task. When auditory and visual signals were presented in the same test session and shared common anchor durations, visual signals were classified as shorter than equivalent-duration auditory signals. This occurred when auditory and visual signals were presented sequentially in the same test session and when presented simultaneously but asynchronously. Presentation of a single-modality signal within a test session, or of both modalities with different anchor durations, did not result in classification differences. The authors posit a model in which auditory and visual signals drive an internal clock at different rates. The clock rate difference is due to an attentional effect on the mode switch and is revealed only when the memories for the short and long anchor durations consist of a mix of contributions from accumulations generated by both the fast auditory and slower visual clock rates. When this occurs, auditory signals seem longer than visual signals relative to the composite memory representation.
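The "memory mixing" account sketched above can be made concrete with a toy pacemaker–accumulator model (the clock rates below are assumed values for illustration, not the paper's fitted parameters): auditory signals drive the pacemaker faster than visual ones, and the anchor memories average contributions from both modality-specific rates, so a mid-range auditory signal reads as "long" while the same physical duration in vision reads as "short".

```python
def accumulate(duration_ms, rate_hz):
    """Pacemaker-accumulator reading: pulses accumulated over the signal."""
    return rate_hz * duration_ms / 1000.0

AUDITORY_RATE = 10.0  # assumed pulses/s; auditory drives the clock faster
VISUAL_RATE = 8.0     # assumed pulses/s; visual drives it more slowly

def mixed_memory_anchors(short_ms, long_ms):
    """Anchor memories formed from a mix of auditory- and visual-driven
    accumulations (here, a simple average of the two readings)."""
    def mix(d):
        return (accumulate(d, AUDITORY_RATE) + accumulate(d, VISUAL_RATE)) / 2
    return mix(short_ms), mix(long_ms)

def classify(duration_ms, rate_hz, short_ms=200, long_ms=800):
    """Respond 'long' if the current accumulation is closer to the mixed
    long-anchor memory than to the mixed short-anchor memory."""
    short_mem, long_mem = mixed_memory_anchors(short_ms, long_ms)
    count = accumulate(duration_ms, rate_hz)
    return "long" if abs(count - long_mem) < abs(count - short_mem) else "short"

# At the 500 ms midpoint, the faster auditory clock overshoots the mixed
# anchors while the slower visual clock undershoots them.
print(classify(500, AUDITORY_RATE))  # -> long
print(classify(500, VISUAL_RATE))    # -> short
```

The model also captures why the effect disappears with single-modality sessions or separate anchors: if the anchor memories are formed from one clock rate only, signal and reference scale together and the midpoint classification is unbiased.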

20.
A perception of coherent motion can be obtained in an otherwise ambiguous or illusory visual display by directing one's attention to a feature and tracking it. We demonstrate an analogous auditory effect in two separate sets of experiments. The temporal dynamics associated with the attention-dependent auditory motion closely matched those previously reported for attention-based visual motion. Since attention-based motion mechanisms appear to exist in both modalities, we also tested for multimodal (audiovisual) attention-based motion, using stimuli composed of interleaved visual and auditory cues. Although subjects were able to track a trajectory using cues from both modalities, no one spontaneously perceived "multimodal motion" across both visual and auditory cues. Rather, they reported motion perception only within each modality, thereby revealing a spatiotemporal limit on putative cross-modal motion integration. Together, results from these experiments demonstrate the existence of attention-based motion in audition, extending current theories of attention-based mechanisms from visual to auditory systems.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司). 京ICP备09084417号