Similar articles
20 similar articles found.
1.
2.
A conversation is made up of visual and auditory signals in a complex flow of events. What is the relative importance of these components for young children's ability to maintain attention on a conversation? In the present set of experiments, the visual and auditory signals were disentangled in four filmed events. The visual events were accompanied either by the speech sounds of the conversation or by matched motor sounds, and the auditory events by either the natural visual turn-taking of the conversation or a matched turn-taking of toy trucks. A cornea-reflection technique was used to record the gaze patterns of subjects while they were looking at the films. Three age groups of typically developing children were studied: 6-month-olds, 1-year-olds, and 3-year-olds. The results show that the children are more attracted by the social component of the conversation, independent of the kind of sound used. Older children find spoken language more interesting than motor sound. Children look longer at the speaking agent when humans maintain the conversation. The study also revealed that children are more attracted to the mouth than to the eye area. The ability to make predictive gaze shifts develops gradually with age.

3.
We examined the influence of age and the emotionality of auditory stimuli on long-term memory for environmental sound events. Sixty children aged 7–11 years were presented with two environmental sound events: an emotional event (a car crash) and a neutral event (someone brushing their teeth). Each sound event comprised six individual environmental sounds, and the participants passively listened to the sound events through a headset. After a two-week delay, participants performed a cued recall task and a recognition task. Independent of age, children were notably poor at recalling the sound events. Children recalled and recognized significantly more sounds from the emotional sound event than from the neutral sound event. Additionally, the older children performed the recall task better than the younger children. The present findings confirm and extend the previously reported superiority of emotional material in memory.

4.
Using virtual reality techniques, we created a virtual room within which participants could orient themselves by means of a head-mounted display. Participants were required to search for an object that was not immediately visible, attached to different parts of the virtual room's walls. The search could be guided by a light and/or a sound emitted by the object. When the object was found, participants engaged it with a sighting circle. The time taken by participants to initiate the search and to engage the target object was measured. Results from three experiments suggest that (1) advantages in starting the search, finding, and engaging the object were found when the object emitted both light and sound; (2) these advantages disappeared when the visual and auditory information emitted by the object was separated in time by more than 150 ms; and (3) misleading visual information produced greater interference than misleading auditory information (e.g., sound from one part of the room, light from the object).

5.
Audiovisual phenomenal causality
We report three experiments in which visual or audiovisual displays depicted a surface (target) set into motion shortly after one or more events occurred. A visual motion was used as an initial event, followed directly either by the target motion or by one of three marker events: a collision sound, a blink of the target stimulus, or the blink together with the sound. The delay between the initial event and the onset of the target motion was varied systematically. The subjects had to rate the degree of perceived causality between these events. The results of the first experiment showed a systematic decline of causality judgments with an increasing time delay. Causality judgments increased when additional auditory or visual information marked the onset of the target motion. Visual blinks of the target and auditory clacks produced similar causality judgments. The second experiment tested several models of audiovisual causal processing by varying the position of the sound within the visual delay period. No systematic effect of the sound position occurred. The third experiment showed a subjective shortening of delays filled by a clack sound, as compared with unfilled delays. However, this shortening cannot fully explain the increased tolerance for delays containing the clack sound. Taken together, the results are consistent with the interpretation that the main source of the causality judgments in our experiments is the impression of a plausible unitary event and that perfect synchrony is not necessary in this case.

6.
The mechanism of auditory masking is the key to understanding how a particular target sound is processed in a noisy acoustic environment, that is, to the "cocktail party" problem. Auditory masking falls into two types: energetic masking and informational masking. The former arises from the overlap of the target and masking sounds in time and frequency at the auditory periphery, whereas the latter is thought to arise from the masking sound competing with the target sound for processing resources in the central auditory system. Informational masking has long been treated as a single-component phenomenon, and this conceptual framework has become a bottleneck constraining deeper study of its mechanisms. Informational masking comprises at least two subcomponents, perceptual informational masking and cognitive informational masking, which stem from different central mechanisms. Under multi-talker speech masking, the total amount of masking is the sum of energetic masking, perceptual informational masking, and cognitive informational masking. By manipulating the perceived spatial separation between masker and target, the intelligibility of the masking speech, and the perceptual similarity between masker and target, a double dissociation of the two informational-masking subcomponents can be demonstrated. Functional magnetic resonance imaging further shows that the two subcomponents have distinct neural mechanisms.
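The additive claim in this abstract (that total masking under multi-talker conditions is the sum of three components) can be written schematically. The following LaTeX fragment is our own illustrative rendering; the symbols are not taken from the paper:

```latex
% Schematic decomposition of total masking under multi-talker speech
% masking, as stated in the abstract. Symbol names are illustrative.
\[
M_{\text{total}} = M_{\text{energetic}} + M_{\text{perceptual IM}} + M_{\text{cognitive IM}}
\]
```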

7.
This article discusses two experiments on the discrimination of time intervals presented in sequences marked by brief auditory signals. Participants had to indicate whether the last interval in a series of three intervals marked by four auditory signals was shorter or longer than the previous intervals. Three base durations were under investigation: 75, 150, and 225 ms. In Experiment 1, sounds were presented through headphones, from a single speaker in front of the participants, or from four equally spaced speakers. In all three presentation modes, the highest difference threshold was obtained in the lower base duration condition (75 ms), indicating an impairment of temporal processing when sounds are presented too rapidly. The results also indicate the presence, in each presentation mode, of a 'time-shrinking effect' (i.e., the last interval being perceived as briefer than the preceding ones) at 75 ms, but not at 225 ms. Lastly, using different sound sources to mark time did not significantly impair discrimination. In Experiment 2, three signals were presented from the same source, and the last signal was presented at one of two locations, either close or far. The perceived duration was not influenced by the location of the fourth signal when the participant knew before each trial where the sounds would be delivered. However, when the participant was uncertain as to its location, more space between markers resulted in longer perceived duration, a finding that applies only at 150 and 225 ms. Moreover, the perceived duration was affected by the direction of the sequences (left-right vs. right-left).
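As a rough illustration of the stimulus structure described above (four brief markers delimiting three empty intervals, with the last interval lengthened or shortened relative to the base duration), here is a minimal Python/NumPy sketch. The sample rate, 10 ms marker length, and delta value are illustrative assumptions, not values from the paper:

```python
# Minimal sketch of a three-interval sequence marked by four brief signals.
# Marker length, sample rate, and delta are illustrative assumptions.
import numpy as np

FS = 44100                            # sample rate (Hz), illustrative
MARKER = np.ones(int(0.010 * FS))     # 10-ms rectangular marker, illustrative

def three_interval_sequence(base_ms: float, delta_ms: float) -> np.ndarray:
    """Marker onsets at 0, base, 2*base, and 3*base + delta (ms, onset to
    onset); the first two intervals equal the base, the last is base + delta."""
    onsets_ms = [0.0, base_ms, 2 * base_ms, 3 * base_ms + delta_ms]
    n = int(FS * onsets_ms[-1] / 1000.0) + len(MARKER)
    out = np.zeros(n)
    for t in onsets_ms:
        i = int(FS * t / 1000.0)
        out[i:i + len(MARKER)] += MARKER
    return out

seq = three_interval_sequence(base_ms=75.0, delta_ms=+15.0)  # last interval longer
```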

8.
Although our subjective experience of the world is one of discrete sound sources, the individual frequency components that make up these separate sources are spread across the frequency spectrum. Listeners use various simple cues, including common onset time and harmonicity, to help them achieve this perceptual separation. Our ability to use harmonicity to segregate two simultaneous sound sources is constrained by the frequency resolution of the auditory system, and is much more effective for low-numbered, resolved harmonics than for higher-numbered, unresolved ones. Our ability to use interaural time differences (ITDs) in perceptual segregation poses a paradox. Although ITDs are the dominant cue for the localization of complex sounds, listeners cannot use ITDs alone to segregate the speech of a single talker from similar simultaneous sounds. Listeners are, however, very good at using ITD to track a particular sound source across time. This difference might reflect two different levels of auditory processing, indicating that listeners attend to grouped auditory objects rather than to those frequencies that share a common ITD.

9.
In three experiments, listeners were required to either localize or identify the second of two successive sounds. The first sound (the cue) and the second sound (the target) could originate from either the same or different locations, and the interval between the onsets of the two sounds (stimulus onset asynchrony, SOA) was varied. Sounds were presented out of visual range at 135° azimuth, left or right. In Experiment 1, localization responses were made more quickly at 100 ms SOA when the target sounded from the same location as the cue (i.e., a facilitative effect), and at 700 ms SOA when the target and cue sounded from different locations (i.e., an inhibitory effect). In Experiments 2 and 3, listeners were required to monitor visual information presented directly in front of them at the same time as the auditory cue and target were presented behind them. These two experiments differed in that in order to perform the visual task accurately in Experiment 3, eye movements to visual stimuli were required. In both experiments, a transition from facilitation at a brief SOA to inhibition at a longer SOA was observed for the auditory task. Taken together, these results suggest that location-based auditory inhibition of return (IOR) is not dependent on either eye movements or saccade programming to sound locations.

10.
When you are looking for an object, does hearing its characteristic sound make you find it more quickly? Our recent results supported this possibility by demonstrating that when a cat target, for example, was presented among other objects, a simultaneously presented "meow" sound (containing no spatial information) reduced the manual response time for visual localization of the target. To extend these results, we determined how rapidly an object-specific auditory signal can facilitate target detection in visual search. On each trial, participants fixated a specified target object as quickly as possible. The target's characteristic sound speeded the saccadic search time within 215–220 msec and also guided the initial saccade toward the target, compared with presentation of a distractor's sound or with no sound. These results suggest that object-based auditory-visual interactions rapidly increase the target object's salience in visual search.

11.
Jeesun Kim, Visual Cognition, 2013, 21(7), 1017-1033
The study examined the effect that auditory information (speaker language/accent: Japanese or French) had on the processing of visual information (the speaker's race: Asian or Caucasian) in two forced-choice tasks: classification and perceptual judgement of animated talking characters. Two (male and female) sets of facial morphs were constructed such that a 3-D head of Caucasian appearance was gradually morphed (in 11 steps) into one of Asian appearance. Each facial morph was animated in association with spoken French/Japanese or English with a French/Japanese accent. To examine the auditory effect, each animation was played with or without sound. Experiment 1 used an Asian or Caucasian classification task. Results showed that faces heard in conjunction with Japanese or a Japanese accent were more likely to be classified as Asian than those presented without sound. Experiment 2 used a same/different judgement task. Results showed that accuracy was improved by hearing a Japanese accent compared to no sound. These results are discussed in terms of voice information acting as a cue that assists in organizing and attending to face features.

12.
A dual-task paradigm was used to examine whether the temporal expectation effect induced by a rhythmic auditory stimulus sequence presented at a relatively slow rate is affected by a concurrent visual working memory task. The results showed that, regardless of whether the target appeared in the auditory or the visual modality, and under both dual-task and single-task conditions, participants responded faster to targets following a regular auditory sequence than to targets following an irregular auditory sequence; that is, the temporal expectation effect induced by the rhythmic sequence was unaffected by the working memory task. These results indicate that rhythm-based temporal expectation does not depend on attentional control.

13.
Two experiments examine the effect of a simulated reverberant auditory environment on an immediate recall test in which auditory distracters in the form of speech are played to the participants (the 'irrelevant sound effect'). An echo-intensive environment, simulated by adding reverberation to the speech, reduced the extent of 'changes in state' in the irrelevant speech stream by smoothing the profile of the waveform. In both experiments, the reverberant auditory environment produced significantly smaller irrelevant sound distraction effects than an echo-free environment. Results are interpreted in terms of the changing-state hypothesis, which holds that the acoustic content of irrelevant sound, rather than its phonology or semantics, determines the extent of the irrelevant sound effect (ISE).

14.
It is well known that discrepancies in the location of synchronized auditory and visual events can lead to mislocalizations of the auditory source, so-called ventriloquism. In two experiments, we tested whether such cross-modal influences on auditory localization depend on deliberate visual attention to the biasing visual event. In Experiment 1, subjects pointed to the apparent source of sounds in the presence or absence of a synchronous peripheral flash. They also monitored for target visual events, either at the location of the peripheral flash or in a central location. Auditory localization was attracted toward the synchronous peripheral flash, but this was unaffected by where deliberate visual attention was directed in the monitoring task. In Experiment 2, bilateral flashes were presented in synchrony with each sound, to provide competing visual attractors. When these visual events were equally salient on the two sides, auditory localization was unaffected by which side subjects monitored for visual targets. When one flash was larger than the other, auditory localization was slightly but reliably attracted toward it, but again regardless of where visual monitoring was required. We conclude that ventriloquism largely reflects automatic sensory interactions, with little or no role for deliberate spatial attention.

15.
Involuntary listening aids seeing: evidence from human electrophysiology
It is well known that sensory events of one modality can influence judgments of sensory events in other modalities. For example, people respond more quickly to a target appearing at the location of a previous cue than to a target appearing at another location, even when the two stimuli are from different modalities. Such cross-modal interactions suggest that involuntary spatial attention mechanisms are not entirely modality-specific. In the present study, event-related brain potentials (ERPs) were recorded to elucidate the neural basis and timing of involuntary, cross-modal spatial attention effects. We found that orienting spatial attention to an irrelevant sound modulates the ERP to a subsequent visual target over modality-specific, extrastriate visual cortex, but only after the initial stages of sensory processing are completed. These findings are consistent with the proposal that involuntary spatial attention orienting to auditory and visual stimuli involves shared, or at least linked, brain mechanisms.

16.
Vatakis and Spence (in press; "Crossmodal binding: Evaluating the 'unity assumption' using audiovisual speech stimuli", Perception & Psychophysics) recently demonstrated that when two briefly presented speech signals (one auditory and the other visual) refer to the same audiovisual speech event, people find it harder to judge their temporal order than when they refer to different speech events. Vatakis and Spence argued that the 'unity assumption' facilitated crossmodal binding on the former (matching) trials by means of a process of temporal ventriloquism. In the present study, we investigated whether the 'unity assumption' would also affect the binding of non-speech stimuli (video clips of object action or musical notes). The auditory and visual stimuli were presented at a range of stimulus onset asynchronies (SOAs) using the method of constant stimuli. Participants made unspeeded temporal order judgments (TOJs) regarding which modality stream had been presented first. The auditory and visual musical and object action stimuli were either matched (e.g., the sight of a note being played on a piano together with the corresponding sound) or else mismatched (e.g., the sight of a note being played on a piano together with the sound of a guitar string being plucked). However, in contrast to the results of Vatakis and Spence's recent speech study, no significant difference in the accuracy of temporal discrimination performance for the matched versus mismatched video clips was observed. Reasons for this discrepancy are discussed.
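The abstract does not spell out the analysis, but TOJ data collected with the method of constant stimuli are conventionally summarized by fitting a psychometric function to the proportion of "visual first" responses across SOAs; the 50% point gives the point of subjective simultaneity (PSS) and the slope gives the just noticeable difference (JND), an index of temporal discrimination accuracy. A minimal sketch, assuming a cumulative Gaussian fit; the SOA grid and response proportions are invented for illustration:

```python
# Fit a cumulative Gaussian to hypothetical TOJ data and extract PSS/JND.
# SOA values and response proportions are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

soas = np.array([-300, -200, -100, 0, 100, 200, 300])  # ms; negative = audio first
p_visual_first = np.array([0.05, 0.12, 0.30, 0.55, 0.78, 0.90, 0.97])

def cum_gauss(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(cum_gauss, soas, p_visual_first, p0=[0.0, 100.0])

pss = mu                       # SOA at which the two streams appear simultaneous
jnd = sigma * norm.ppf(0.75)   # half the 25%-75% spread of the fitted curve
print(f"PSS = {pss:.1f} ms, JND = {jnd:.1f} ms")
```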

17.
The neural mechanisms underlying the perception of pitch, a sensory attribute of paramount importance in hearing, have been a matter of debate for over a century. A question currently at the heart of the debate is whether the pitch of all harmonic complex tones can be determined by the auditory system using a single mechanism, or whether two different neural mechanisms are involved, depending on the stimulus conditions. When the harmonics are widely spaced, as is the case at high fundamental frequencies (F0s), and/or when the frequencies of the harmonics are low, the frequency components of the sound fall in different peripheral auditory channels and are then "resolved" by the peripheral auditory system. In contrast, at low F0s, or when the harmonics are high in frequency, several harmonics interact within the passbands of the same auditory filters and are thus "unresolved" by the peripheral auditory system. The idea that more than one mechanism mediates the encoding of pitch, depending on the resolvability status of the harmonics, was investigated here by testing for transfer of learning in F0 discrimination between different stimulus conditions involving either resolved or unresolved harmonics, after specific training in one of these conditions. The results, which show some resolvability-specificity of F0-discrimination learning, support the hypothesis that two different underlying mechanisms mediate the encoding of the F0 of resolved and unresolved harmonics.
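As a back-of-the-envelope illustration of the resolved/unresolved distinction, one common simplification treats harmonic n as resolved when the harmonic spacing (equal to the F0) exceeds the equivalent rectangular bandwidth (ERB) of the auditory filter centred on it, using the Glasberg and Moore (1990) ERB formula. The one-ERB criterion below is our assumption for the sketch, not the paper's procedure:

```python
# Estimate which harmonics of a complex tone count as "resolved" under a
# simple one-ERB spacing criterion (an assumption, not the paper's method).
def erb(f_hz: float) -> float:
    """Equivalent rectangular bandwidth (Hz) of the auditory filter at f_hz,
    after Glasberg & Moore (1990)."""
    return 24.7 * (4.37 * f_hz / 1000.0 + 1.0)

def is_resolved(n: int, f0: float) -> bool:
    """Harmonic n is 'resolved' if neighbouring harmonics are more than
    one ERB apart at its frequency (harmonic spacing equals f0)."""
    return f0 > erb(n * f0)

for f0 in (100.0, 400.0):
    last = max(n for n in range(1, 40) if is_resolved(n, f0))
    print(f"F0 = {f0:.0f} Hz: harmonics up to ~{last} are resolved")
```

Consistent with the abstract's framing, this criterion places roughly the first six to eight harmonics in the resolved range, with the exact cutoff depending on F0.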

18.
Auditory saltation is a misperception of the spatial location of repetitive, transient stimuli. It arises when clicks at one location are followed in perfect temporal cadence by identical clicks at a second location. This report describes two psychophysical experiments designed to examine the sensitivity of auditory saltation to different stimulus cues for auditory spatial perception. Experiment 1 was a dichotic study in which six different six-click train stimuli were used to generate the saltation effect. Clicks lateralised using interaural time differences and clicks lateralised using interaural level differences produced equivalent saltation effects, confirming an earlier finding. Switching the stimulus cue from an interaural time difference to an interaural level difference (or the reverse) in mid-train was inconsequential to the saltation illusion. Experiment 2 was a free-field study in which subjects rated the illusory motion generated by clicks emitted from two sound sources symmetrically disposed around the interaural axis, i.e., on the same cone of confusion in the auditory hemifield opposite one ear. Stimuli in such positions produce spatial location judgments that are based more heavily on monaural spectral information than on binaural computations. The free-field stimuli produced robust saltation. The data from both experiments are consistent with the view that auditory saltation can emerge from spatial processing, irrespective of the stimulus cue information used to determine click laterality or location.
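A minimal NumPy sketch of the kind of dichotic stimulus described here: click trains lateralised by an ITD or an ILD, with the first half of the sequence at one intracranial position and the second half at another, in unbroken temporal cadence. The sample rate, inter-onset interval, and cue magnitudes are illustrative assumptions, not the paper's values:

```python
# Dichotic click trains lateralised by ITD and/or ILD (illustrative values).
import numpy as np

FS = 44100  # sample rate (Hz)

def click_train(n_clicks=3, ioi_ms=80.0, itd_us=0.0, ild_db=0.0):
    """Stereo click train; positive itd_us or ild_db moves the image right
    (left ear lagged or attenuated), negative values move it left."""
    ioi = int(FS * ioi_ms / 1000.0)
    delay = int(round(FS * itd_us / 1e6))           # left-ear lag in samples
    assert abs(delay) < ioi, "ITD must be smaller than the inter-onset interval"
    n = n_clicks * ioi                              # exact length preserves cadence
    left, right = np.zeros(n), np.zeros(n)
    left_gain = 10.0 ** (-max(ild_db, 0.0) / 20.0)  # attenuate left if ILD > 0
    right_gain = 10.0 ** (min(ild_db, 0.0) / 20.0)  # attenuate right if ILD < 0
    for k in range(n_clicks):
        t = k * ioi
        right[t + max(-delay, 0)] = right_gain
        left[t + max(delay, 0)] = left_gain
    return np.column_stack([left, right])           # shape (samples, 2)

# Saltation-style stimulus: three clicks lateralised left, then three
# lateralised right, with the temporal cadence unbroken across the switch.
stim = np.vstack([click_train(itd_us=-500.0), click_train(itd_us=+500.0)])
```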

19.
Identifying how memories are organized remains a fundamental issue in psychology. Previous work has shown that visual short-term memory is organized according to the object of origin, with participants being better at retrieving multiple pieces of information from the same object than from different objects. However, it is not yet clear whether similar memory structures are employed for other modalities, such as audition. Under analogous conditions in the auditory domain, we found that short-term memories for sound can also be organized according to object, with a same-object advantage being demonstrated for the retrieval of information in an auditory scene defined by two complex sounds overlapping in both space and time. Our results provide support for the notion of an auditory object, in addition to the continued identification of similar processing constraints across visual and auditory domains. The identification of modality-independent organizational principles of memory, such as object-based coding, suggests possible mechanisms by which the human processing system remembers multimodal experiences.

20.
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial two sounds, an object (a vowel) and a stream (a series of tones), were presented with one target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to one of the two sounds and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to three. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams and provide new information about the way that the auditory world is parsed.
