Similar Articles
Found 20 similar articles (search time: 31 ms)
1.
The iambic-trochaic law has been proposed to account for the grouping of auditory stimuli: Sequences of sounds that differ only in duration are grouped as iambs (i.e., the most prominent element marks the end of a sequence of sounds), and sequences that differ only in pitch or intensity are grouped as trochees (i.e., the most prominent element marks the beginning of a sequence). In 3 experiments, comprising a familiarization and a test phase, we investigated whether a similar grouping principle is also present in the visual modality. During familiarization, sequences of visual stimuli were repeatedly presented to participants, who were asked to memorize their order of presentation. In the test phase, participants were better at remembering fragments of the familiarization sequences that were consistent with the iambic-trochaic law. Thus, they were better at remembering fragments that had the element with longer duration in final position (iambs) and fragments that had the element with either higher temporal frequency or higher intensity in initial position (trochees), as compared with fragments that were inconsistent with the iambic-trochaic law or that never occurred during familiarization.

2.
We investigated the discrimination of two neighboring intra- or inter-modal empty time intervals marked by three successive stimuli. Each of the three markers was a flash (visual—V) or a sound (auditory—A). The first and last markers were of the same modality, while the second one was either A or V, resulting in four conditions: VVV, VAV, AVA and AAA. Participants judged whether the second interval, whose duration was systematically varied, was shorter or longer than the 500-ms first interval. Compared with VVV and AAA, discrimination was impaired with VAV, but not so much with AVA (in Experiment 1). Whereas VAV and AVA consisted of the same set of single intermodal intervals (VA and AV), discrimination was impaired in the VAV compared to the AVA condition. This difference between VAV and AVA could not be attributed to the participants' strategy to perform the discrimination task, e.g., ignoring the standard interval or replacing the visual stimuli with sounds in their mind (in Experiment 2). These results are discussed in terms of sequential grouping according to sensory similarity.

3.
Contrasting results in visual and auditory working memory studies suggest that the mechanisms of association between location and identity of stimuli depend on the sensory modality of the input. In this auditory study, we tested whether the association of two features both encoded in the “what” stream is different from the association between a “what” and a “where” feature. In an old–new recognition task, blindfolded participants were presented with sequences of sounds varying in timbre, pitch and location. They were required to judge whether the timbre, pitch or location of a single-probe stimulus was identical to or different from the timbre, pitch or location of one of the sounds of the previous sequence. Only variations in one of the three features were relevant for the task, whereas the other two features could undergo task-irrelevant changes. Results showed that task-irrelevant variations in the “what” features (either timbre or pitch) impaired recognition of sound location and of the other, task-relevant “what” feature, whereas changes in sound location did not affect the recognition of either one of the “what” features. We conclude that the identity of sounds is incidentally processed even when not required by the task, whereas sound location is not maintained when task irrelevant.

4.
Studies using operant training have demonstrated that laboratory animals can discriminate the number of objects or events based on either auditory or visual stimuli, as well as the integration of both auditory and visual modalities. To date, studies of spontaneous number discrimination in untrained animals have been restricted to the visual modality, leaving open the question of whether such capacities generalize to other modalities such as audition. To explore the capacity to spontaneously discriminate number based on auditory stimuli, and to assess the abstractness of the representation underlying this capacity, a habituation-discrimination procedure involving speech and pure tones was used with a colony of cotton-top tamarins. In the habituation phase, we presented subjects with either two- or three-speech syllable sequences that varied with respect to overall duration, inter-syllable duration, and pitch. In the test phase, we presented subjects with a counterbalanced order of either two- or three-tone sequences that also varied with respect to overall duration, inter-syllable duration, and pitch. The proportion of looking responses to test stimuli differing in number was significantly greater than to test stimuli consisting of the same number. Combined with earlier work, these results show that at least one non-human primate species can spontaneously discriminate number in both the visual and auditory domain, indicating that this capacity is not tied to a particular modality, and within a modality, can accommodate differences in format.

5.
Ninety-six 3½-month-old infants were tested in an infant-control habituation procedure to determine whether they could detect three types of audio-visual relations in the same events. The events portrayed two amodal invariant relations, temporal synchrony and temporal microstructure specifying the composition of the objects, and one modality-specific relation, that between the pitch of the sound and the color/shape of the objects. Subjects were habituated to two events accompanied by their natural, synchronous, and appropriate sounds and then received test trials in which the relation between the visual and the acoustic information was changed. Consistent with Gibson's increasing specificity hypothesis, it was expected that infants would differentiate amodal invariant relations prior to detecting arbitrary, modality-specific relations. Results were consistent with this prediction, demonstrating significant visual recovery to a change in temporal synchrony and temporal microstructure, but not to a change in the pitch-color/shape relations. Two subsequent discrimination studies demonstrated that infants' failure to detect the changes in pitch-color/shape relations could not be attributed to an inability to discriminate the pitch or the color/shape changes used in Experiment 1. Infants showed robust discrimination of the contrasts used.

6.
Crossmodal correspondences have often been demonstrated using congruency effects between pairs of stimuli in different sensory modalities that vary along separate dimensions. To date, however, it is still unclear to what extent these correspondences are relative versus absolute in nature: that is, whether they result from pre-defined values that rigidly link the two dimensions or rather from flexible values related to the previous occurrence of the crossmodal stimuli. Here, we investigated this issue in a speeded classification task featuring the correspondence between auditory pitch and visual size (e.g., congruent correspondence between high pitch/small disc and low pitch/large disc). Participants classified the size of the visual stimuli (large vs. small) while hearing concurrent high- or low-pitched task-irrelevant sounds. On some trials, visual stimuli were paired instead with an “intermediate” pitch that could be interpreted differently according to the auditory stimulus on the preceding trial (i.e., as “lower” following the presentation of a high-pitched tone, but as “higher” following the presentation of a low-pitched tone). Performance on sequence-congruent trials (e.g., when a small disc paired with the intermediate-pitched tone was preceded by a low-pitched tone) was compared to sequence-incongruent trials (e.g., when a small disc paired with the intermediate-pitched tone was preceded by a high-pitched tone). The results revealed faster classification responses on sequence-congruent than on sequence-incongruent trials. This demonstrates that the effect of the pitch/size correspondence is relative in nature, and subject to trial-by-trial interpretation of the stimulus pair.

7.
Several lines of evidence suggest that during processing of events, the features of these events become connected via episodic bindings. Such bindings have been demonstrated for a large number of visual and auditory stimulus features, like color and orientation, or pitch and loudness. Importantly, most visual and auditory events typically also involve temporal features, like onset time or duration. So far, however, whether temporal stimulus features are also bound into event representations has never been tested directly. The aim of the present study was to investigate possible binding between stimulus duration and other features of auditory events. In Experiment 1, participants had to respond with two keys to a low- or high-pitched sine tone. Critically, the tones were presented with two different presentation durations. Sequential analysis of RT data indicated binding of stimulus duration into the event representation: at pitch repetitions, performance was better when both pitch and duration repeated, relative to when only pitch repeated and duration switched. This finding was replicated with loudness as the relevant stimulus feature in Experiment 2. In sum, the results demonstrate that temporal features are bound into auditory event representations. This finding is an important advancement for binding theory in general, and raises several new questions for future research.

8.
Here, we investigate how audiovisual context affects perceived event duration with experiments in which observers reported which of two stimuli they perceived as longer. Target events were visual and/or auditory and could be accompanied by nontargets in the other modality. Our results demonstrate that the temporal information conveyed by irrelevant sounds is automatically used when the brain estimates visual durations but that irrelevant visual information does not affect perceived auditory duration (Experiment 1). We further show that auditory influences on subjective visual durations occur only when the temporal characteristics of the stimuli promote perceptual grouping (Experiments 1 and 2). Placed in the context of scalar expectancy theory of time perception, our third and fourth experiments have the implication that audiovisual context can lead both to changes in the rate of an internal clock and to temporal ventriloquism-like effects on perceived on- and offsets. Finally, intramodal grouping of auditory stimuli diminished any crossmodal effects, suggesting a strong preference for intramodal over crossmodal perceptual grouping (Experiment 5).

9.
Two experiments were performed to determine whether categorization of the pitch of a probe tone is influenced by the pitch of, and response made to, a preceding prime tone. The prime and the probe could be drawn either from a pool of low-frequency sounds or from a pool of high-frequency sounds. The results of both experiments indicated that performance was best when the prime and the probe had the same pitch (and therefore required the same response), intermediate when the two sounds differed in pitch and required different responses, and worst when the prime and the probe differed in pitch but required the same response (i.e., they were drawn from the same frequency pool). The results of Experiment 2 revealed in addition that when a repeated response was required, performance declined as the magnitude of the frequency change increased and that responses were made more quickly and accurately if the direction of the frequency change was away from the alternative category than if it was toward the alternative category. The results demonstrate that categorization of sounds by pitch is accomplished with reference to a previous processing episode.

10.
A sound presented in temporal proximity to a light can alter the perceived temporal occurrence of that light (temporal ventriloquism). The authors explored whether spatial discordance between the sound and light affects this phenomenon. Participants made temporal order judgments about which of 2 lights appeared first, while they heard sounds before the 1st and after the 2nd light. Sensitivity was higher (i.e., a lower just noticeable difference) when the sound-light interval was approximately 100 ms rather than approximately 0 ms. This temporal ventriloquist effect was unaffected by whether sounds came from the same or a different position as the lights, whether the sounds were static or moved, or whether they came from the same or opposite sides of fixation. Yet, discordant sounds interfered with speeded visual discrimination. These results challenge the view that intersensory interactions in general require spatial correspondence between the stimuli.

11.
Humans have a strong tendency to spontaneously group visual or auditory stimuli together in larger patterns. One of these perceptual grouping biases is formulated as the iambic/trochaic law, where humans group successive tones alternating in pitch and intensity as trochees (high–low and loud–soft) and alternating in duration as iambs (short–long). The grouping of alternations in pitch and intensity into trochees is a human universal and is also present in one non-human animal species, rats. The perceptual grouping of sounds alternating in duration seems to be affected by native language in humans and has so far not been found among animals. In the current study, we explore to what extent these perceptual biases are present in a songbird, the zebra finch. Zebra finches were trained to discriminate between short strings of pure tones organized as iambs and as trochees. One group received tones that alternated in pitch, a second group heard tones alternating in duration, and for a third group, tones alternated in intensity. Those zebra finches that showed sustained correct discrimination were next tested with longer, ambiguous strings of alternating sounds. The zebra finches in the pitch condition categorized ambiguous strings of alternating tones as trochees, similar to humans. However, most of the zebra finches in the duration and intensity conditions did not learn to discriminate between training stimuli organized as iambs and trochees. This study shows that the perceptual bias to group tones alternating in pitch as trochees is not specific to humans and rats, but may be more widespread among animals.

12.
Pitch, the perceptual correlate of fundamental frequency (F0), plays an important role in speech, music, and animal vocalizations. Changes in F0 over time help define musical melodies and speech prosody, while comparisons of simultaneous F0 are important for musical harmony, and for segregating competing sound sources. This study compared listeners' ability to detect differences in F0 between pairs of sequential or simultaneous tones that were filtered into separate, nonoverlapping spectral regions. The timbre differences induced by filtering led to poor F0 discrimination in the sequential, but not the simultaneous, conditions. Temporal overlap of the two tones was not sufficient to produce good performance; instead performance appeared to depend on the two tones being integrated into the same perceptual object. The results confirm the difficulty of comparing the pitches of sequential sounds with different timbres and suggest that, for simultaneous sounds, pitch differences may be detected through a decrease in perceptual fusion rather than an explicit coding and comparison of the underlying F0s.

13.
Music provides a useful domain in which to study how the different attributes of complex multidimensional stimuli are processed both separately and in combination. Much research has been devoted to addressing how the dimensions of pitch and time are co-processed in music listening tasks. Neuropsychological studies have provided evidence for a certain degree of independence between pitch and temporal processing, although there are also many experimental reports favouring interactive models of pitch and temporal processing. Here we extended these investigations by examining the processing of pitch and temporal structures when music is presented in the visual modality (i.e. in the form of music notation). In two experiments, musician subjects were presented with visual musical stimuli containing both pitch and temporal information for a brief amount of time, and they were subsequently required to recall both the pitch and temporal information. In Experiment 1, we documented that concurrent, unattended pitch and rhythmic auditory interference stimuli disrupted the recall of pitch, but not time. In Experiment 2, we showed that manipulating the tonal structure of the visual presentation stimuli affected the recall of pitch, but not time. On the other hand, manipulating the metrical properties of the visual stimuli affected recall of time, and pitch to a certain extent. Taken together, these results suggest that the processing of pitch is constrained by the processing of time, but the processing of time is not affected by the processing of pitch. These results do not support either strong independence or interactive models of pitch and temporal processing, but they suggest that the processing of time can occur independently from the processing of pitch when performing a written recall task.

14.
In many everyday situations, our senses are bombarded by many different unisensory signals at any given time. To gain the most veridical, and least variable, estimate of environmental stimuli/properties, we need to combine the individual noisy unisensory perceptual estimates that refer to the same object, while keeping those estimates belonging to different objects or events separate. How, though, does the brain “know” which stimuli to combine? Traditionally, researchers interested in the crossmodal binding problem have focused on the roles that spatial and temporal factors play in modulating multisensory integration. However, crossmodal correspondences between various unisensory features (such as between auditory pitch and visual size) may provide yet another important means of constraining the crossmodal binding problem. A large body of research now shows that people exhibit consistent crossmodal correspondences between many stimulus features in different sensory modalities. For example, people consistently match high-pitched sounds with small, bright objects that are located high up in space. The literature reviewed here supports the view that crossmodal correspondences need to be considered alongside semantic and spatiotemporal congruency, among the key constraints that help our brains solve the crossmodal binding problem.

15.
Participants made speeded target-nontarget responses to singly presented auditory stimuli in 2 tasks. In within-dimension conditions, participants listened for either of 2 target features taken from the same dimension; in between-dimensions conditions, the target features were taken from different dimensions. Judgments were based on the presence or absence of either target feature. Speech sounds, defined relative to sound identity and locale, were used in Experiment 1, whereas tones, comprising pitch and locale components, were used in Experiments 2 and 3. In all cases, participants performed better when the target features were taken from the same dimension than when they were taken from different dimensions. Data suggest that the auditory and visual systems exhibit the same higher level processing constraints.

16.
The effect of brief auditory stimuli on visual apparent motion
Getzmann S. Perception, 2007, 36(7): 1089–1103
When two discrete stimuli are presented in rapid succession, observers typically report a movement of the lead stimulus toward the lag stimulus. The object of this study was to investigate crossmodal effects of irrelevant sounds on this illusion of visual apparent motion. Observers were presented with two visual stimuli that were temporally separated by interstimulus onset intervals from 0 to 350 ms. After each trial, observers classified their impression of the stimuli using a categorisation system. The presentation of short sounds intervening between the visual stimuli facilitated the impression of apparent motion relative to baseline (visual stimuli without sounds), whereas sounds presented before the first and after the second visual stimulus as well as simultaneously presented sounds reduced the motion impression. The results demonstrate an effect of the temporal structure of irrelevant sounds on visual apparent motion that is discussed in light of a related multisensory phenomenon, 'temporal ventriloquism', on the assumption that sounds can attract lights in the temporal dimension.

17.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (stimulus onset asynchrony, SOA) was varied. Sounds were presented out of visual range at 135° azimuth left or right. In Experiment 1, localization responses were made more quickly at 100 ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at 700 ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them at the same time as the auditory cue and target were presented behind them. These two experiments differed in that in order to perform the visual task accurately in Experiment 3, eye movements to visual stimuli were required. In both experiments, a transition from facilitation at a brief SOA to inhibition at a longer SOA was observed for the auditory task. Taken together these results suggest that location-based auditory inhibition of return (IOR) is not dependent on either eye movements or saccade programming to sound locations.

18.
Three experiments compared forgetting of the duration of a bar-like visual stimulus with forgetting of its length. The main aim of the experiments was to investigate whether subjective shortening (a decrease in the subjective magnitude of a stimulus as its retention interval increased) was observable in length judgements as well as in time judgements, where subjective shortening has often been observed previously. On all trials of the three experiments, humans received two briefly presented coloured bars, separated by a delay ranging from 1 to 10 s, and the bars could differ in length, duration of presentation, or both. In Experiment 1 two groups of subjects made either length or duration judgements, and subjective shortening-type forgetting functions were observed only for duration. Experiments 2 and 3 used the same general procedure, but the stimuli judged could differ both in length and duration within a trial, and different subject groups (Experiment 2) or the same subjects in two conditions (Experiment 3) made either length or duration judgements of stimuli, which were on average physically identical. Subjective shortening was only found with duration, and never with length, supporting the view that subjective shortening may be unique to time judgements.

19.
The perception of time is heavily influenced by attention and memory, both of which change over the lifespan. In the current study, children (8 yrs), young adults (18–25 yrs), and older adults (60–75 yrs) were tested on a duration bisection procedure using 3- and 6-s auditory and visual signals as anchor durations. During test, participants were exposed to a range of intermediate durations, and the task was to indicate whether test durations were closer to the “short” or “long” anchor. All groups reproduced the classic finding that “sounds are judged longer than lights”. This effect was greater for older adults and children than for young adults, but for different reasons. Replicating previous results, older adults made similar auditory judgments as young adults, but underestimated the duration of visual test stimuli. Children showed the opposite pattern, with similar visual judgments as young adults but overestimation of auditory stimuli. Psychometric functions were analyzed using the Sample Known Exactly-Mixed Memory quantitative model of the Scalar Timing Theory of interval timing. Results indicate that children show an auditory-specific deficit in reference memory for the anchors, rather than a general bias to overestimate time, and that aged adults show an exaggerated tendency to judge visual stimuli as “short” due to a reduction in the availability of controlled attention.
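The bisection analysis described above rests on estimating, from each psychometric function, the test duration judged "long" 50% of the time (the bisection point). A minimal sketch of that step, assuming illustrative data rather than the study's actual responses, and using simple linear interpolation instead of the Sample Known Exactly-Mixed Memory model fit:

```python
# Illustrative sketch, not the SKE-MM model from the study: estimate the
# bisection point (duration judged "long" half the time) by linearly
# interpolating between the two test durations that bracket p = 0.5.
def bisection_point(durations, p_long):
    """durations: sorted test durations in seconds;
    p_long: proportion of 'long' responses at each duration."""
    pairs = list(zip(durations, p_long))
    for (d0, p0), (d1, p1) in zip(pairs, pairs[1:]):
        if p0 <= 0.5 <= p1:
            if p1 == p0:
                return (d0 + d1) / 2
            # linear interpolation to the 50% point
            return d0 + (0.5 - p0) * (d1 - d0) / (p1 - p0)
    raise ValueError("response proportions never cross 0.5")

# Hypothetical data for 3-s ("short") and 6-s ("long") anchors:
durs = [3.0, 3.6, 4.2, 4.8, 5.4, 6.0]
p = [0.05, 0.20, 0.45, 0.65, 0.85, 0.95]
bp = bisection_point(durs, p)  # 4.35 s
```

With these made-up proportions the bisection point lands near the geometric mean of the anchors (√(3 × 6) ≈ 4.24 s), the pattern scalar timing theory predicts; a shift of the point toward shorter or longer values is what indexes the over- and underestimation effects the abstract reports.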

20.
Abstract— In a pitch discrimination task, subjects were faster and more accurate in judging low-frequency sounds when these stimuli were presented to the left ear, compared with the right ear. In contrast, a right-ear advantage was found with high-frequency sounds. The effect was in terms of relative frequency and not absolute frequency, suggesting that the effect arose from postsensory mechanisms. A similar laterality effect has been reported in visual perception with stimuli varying in spatial frequency. These multimodal laterality effects may reflect a general computational difference between the two cerebral hemispheres, with the left hemisphere biased for processing high-frequency information and the right hemisphere biased for processing low-frequency information.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号