Similar literature
 Found 20 similar documents (search time: 31 ms)
1.
Talkers are recognized more accurately if they are speaking the listeners’ native language rather than an unfamiliar language. This “language familiarity effect” has been shown not to depend upon comprehension and must instead involve language sound patterns. We further examine the level of sound‐pattern processing involved, by comparing talker recognition in foreign languages versus two varieties of English, by (a) English speakers of one variety, (b) English speakers of the other variety, and (c) non‐native listeners (more familiar with one of the varieties). All listener groups performed better with native than foreign speech, but no effect of language variety appeared: Native listeners discriminated talkers equally well in each, with the native variety never outdoing the other variety, and non‐native listeners discriminated talkers equally poorly in each, irrespective of the variety's familiarity. The results suggest that this talker recognition effect rests not on simple familiarity, but on an abstract level of phonological processing.

2.
The role of interaural time difference (ITD) in perceptual grouping and selective attention was explored in 3 experiments. Experiment 1 showed that listeners can use small differences in ITD between 2 sentences to say which of 2 short, constant target words was part of the attended sentence, in the absence of talker or fundamental frequency differences. Experiments 2 and 3 showed that listeners do not explicitly track components that share a common ITD. Their inability to segregate a harmonic from a target vowel by a difference in ITD was not substantially changed by the vowel being placed in a sentence context, where the sentence shared the same ITD as the rest of the vowel. The results indicate that in following a particular auditory sound source over time, listeners attend to perceived auditory objects at particular azimuthal positions rather than attend explicitly to those frequency components that share a common ITD.

3.
Listeners identified spoken words, letters, and numbers and the spatial location of these utterances in three listening conditions as a function of the number of simultaneously presented utterances. The three listening conditions were a normal listening condition, in which the sounds were presented over seven possible loudspeakers to a listener seated in a sound-deadened listening room; a one-headphone listening condition, in which a single microphone that was placed in the listening room delivered the sounds to a single headphone worn by the listener in a remote room; and a stationary KEMAR listening condition, in which binaural recordings from an acoustic manikin placed in the listening room were delivered to a listener in the remote room. The listeners were presented one, two, or three simultaneous utterances. The results show that utterance identification was better in the normal listening condition than in the one-headphone condition, with the KEMAR listening condition yielding intermediate levels of performance. However, the differences between listening in the normal and in the one-headphone conditions were much smaller when two, rather than three, utterances were presented at a time. Localization performance was good for both the normal and the KEMAR listening conditions and at chance for the one-headphone condition. The results suggest that binaural processing is probably more important for solving the “cocktail party” problem when there are more than two concurrent sound sources.

4.
This study was designed to test the hypothesis that systematic variations in listener behavior can have an important influence on both speaker behavior and communicative success among children. In particular, we investigated the idea that systematic variations in listener behavior might have not only within-trial effects on the adequacy of speakers' messages and the accuracy of communication among children, but also cumulative effects on speakers' initial messages across trials. Effects of stimulus complexity were also examined. Pairs of 7- and 9-year-old children participated in a referential communication game, with the younger child serving as speaker and the older one as listener. Half of the listeners were given a plan for effective listening which emphasized the importance of asking questions if the speakers' messages were ambiguous. Replicating earlier findings, the plan manipulation was successful in encouraging listeners to ask questions when necessary. The major result was that listener questions not only had the expected trial-by-trial effect on message adequacy and communicative accuracy, but also showed a cumulative effect on speaker performance. When exposed to systematic listener feedback, speakers improved their initial messages over trials. Stimulus complexity was not a major determinant of performances. These findings suggest that provision of systematic listener feedback may be an effective method for teaching speaker skills to young children.

5.
This paper revisits the conclusion of our previous work regarding the dominance of meaning in the competition between rhythmic parsing and linguistic parsing. We played five-note rhythm patterns in which each sound is a spoken word of a five-word sentence. We asked listeners to indicate the starting point of the rhythm while disregarding which word would normally be heard as the first word of the sentence. In four studies, we varied task demands by introducing differences in rhythm complexity, rhythm ambiguity, rhythm pairing, and semantic coherence. We found that task complexity affects the dominance of meaning. We therefore amend our previous conclusion: when processing resources are taxed, listeners do not always primarily attend to meaning; instead, they primarily attend to the aspect of the pattern (rhythm or meaning) that is more salient.

6.
Because the environment often includes multiple sounds that overlap in time, listeners must segregate a sound of interest (the auditory figure) from other co-occurring sounds (the unattended auditory ground). We conducted a series of experiments to clarify the principles governing the extraction of auditory figures. We distinguish between auditory "objects" (relatively punctate events, such as a dog's bark) and auditory "streams" (sounds involving a pattern over time, such as a galloping rhythm). In Experiments 1 and 2, on each trial 2 sounds, an object (a vowel) and a stream (a series of tones), were presented with 1 target feature that could be perceptually grouped with either source. In each block of these experiments, listeners were required to attend to 1 of the 2 sounds, and report its perceived category. Across several experimental manipulations, listeners were more likely to allocate the feature to an impoverished object if the result of the grouping was a good, identifiable object. Perception of objects was quite sensitive to feature variation (noise masking), whereas perception of streams was more robust to feature variation. In Experiment 3, the number of sound sources competing for the feature was increased to 3. This produced a shift toward relying more on spatial cues than on the potential contribution of the feature to an object's perceptual quality. The results support a distinction between auditory objects and streams, and provide new information about the way that the auditory world is parsed.

7.
The aim of this study was to examine listener perceptions of an adult male person who stutters (PWS) who did or did not disclose his stuttering. Ninety adults who do not stutter individually viewed one of three videotaped monologues produced by a male speaker with severe stuttering. In one monologue, 30 listeners heard the speaker disclose stuttering at the beginning and in another monologue, 30 listeners heard the speaker disclose stuttering at the end. A third group of 30 listeners viewed a monologue where no disclosure of stuttering occurred. After listeners viewed a monologue, they were asked to rate a set of six Likert scale statements and answer three open-ended questions. The results showed that only one of six Likert statements was significantly different across the three conditions: the speaker was perceived to be significantly more friendly when disclosing stuttering at the end of the monologue than when not disclosing stuttering. There were no significant differences between the percentage of positive and negative comments made by listeners across the three conditions. Listeners' comments to each open-ended question showed they were comfortable listening to stuttering with or without disclosure, and slightly more than half of the listeners believed their perceptions of the speaker did not change when he disclosed stuttering. The results also showed that the speaker who disclosed stuttering at the beginning of the monologue received significantly more positive listener comments than when he disclosed stuttering at the end of the monologue. Results are discussed relative to previous research, the clinical relevance of acknowledging stuttering as a component of treatment, and future research on the self-disclosure of stuttering.
Educational objectives: The reader will be able to: (1) describe how different groups of listeners perceive and respond to two conditions of self-disclosure of stuttering and one condition involving non self-disclosure of stuttering; (2) summarize the range of listener responses to and benefits of self-disclosure of stuttering; and (3) describe the value of self-disclosure of stuttering for the listener and the speaker.

8.
Prior knowledge shapes our experiences, but which prior knowledge shapes which experiences? This question is addressed in the domain of music perception. Three experiments were used to determine whether listeners activate specific musical memories during music listening. Each experiment provided listeners with one of two musical contexts that was presented simultaneously with a melody. After a listener was familiarized with melodies embedded in contexts, the listener heard melodies in isolation and judged the fit of a final harmonic or metrical probe event. The probe event matched either the familiar (but absent) context or an unfamiliar context. For both harmonic (Experiments 1 and 3) and metrical (Experiment 2) information, exposure to context shifted listeners' preferences toward a probe matching the context that they had been familiarized with. This suggests that listeners rapidly form specific musical memories without explicit instruction, which are then activated during music listening. These data pose an interesting challenge for models of music perception which implicitly assume that the listener's knowledge base is predominantly schematic or abstract.

9.
Dyads of Ss from six age groups (range 7 to 22 years) were induced to misunderstand each other in each of three conditions: 1. Biased encoding in the speaker. 2. Biased decoding in the listener. 3. Biases in both speaker and listener. Young Ss explained faulty behavior and attributed responsibility by reference to listeners; older Ss explained by reference to listeners and attributed responsibility to speakers. An increase of insight into listeners' bias was shown throughout the age span, whereas speakers' bias was not mentioned in Ss' explanations. The results were interpreted in terms of development of differentiation and integration of the social roles of speakers and listeners.

10.
In this paper, we study nonverbal listener responses on a corpus with multiple parallel recorded listeners. These listeners were meant to believe that they were the sole listener, while in fact there were three persons listening to the same speaker. The speaker could only see one of the listeners. We analyze the impact of the particular setup of the corpus on the behavior and perception of the two types of listeners: the listeners that could be seen by the speaker and the listeners that could not be seen. Furthermore, we compare the nonverbal listening behaviors of these three listeners to each other with regard to timing and form. We correlate these behaviors with behaviors of the speaker, like pauses and whether the speaker is looking at the listeners or not.

12.
The exponential increase of intensity for an approaching sound source provides salient information for a listener to make judgments of time to arrival (TTA). Specifically, a listener will experience a greater rate of increasing intensity for higher than for lower frequencies during a sound source’s approach. To examine the relative importance of this spectral information, listeners were asked to make judgments about the arrival times of nine 1-octave-band sound sources (the bands were consecutive, nonoverlapping single octaves, ranging from 40–80 Hz to ~10–20 kHz). As is typical in TTA tasks, listeners tended to underestimate the arrival time of the approaching sound source. In naturally occurring and independently manipulated amplification curves, bands with center frequencies between 120 and 250 Hz caused the least underestimation, and bands with center frequencies between 2000 and 7500 Hz caused the most underestimation. This spectral influence appears to be related to the greater perceived urgency of higher-frequency sounds.

13.
Binaural and monaural localization of sound in two-dimensional space
Two experiments were conducted. In experiment 1, part 1, binaural and monaural localization of sounds originating in the left hemifield was investigated. 104 loudspeakers were arranged in a 13 x 8 matrix with 15 degrees separating adjacent loudspeakers in each column and in each row. In the horizontal plane (HP), the loudspeakers extended from 0 degrees to 180 degrees; in the vertical plane (VP), they extended from -45 degrees to 60 degrees with respect to the interaural axis. Findings of special interest were: (i) binaural listeners identified the VP coordinate of the sound source more accurately than did monaural listeners, and (ii) monaural listeners identified the VP coordinate of the sound source more accurately than its HP coordinate. In part 2, it was found that foreknowledge of the HP coordinate of the sound source aided monaural listeners in identifying its VP coordinate, but the converse did not hold. In experiment 2, part 1, localization performances were evaluated when the sound originated from consecutive 45 degrees segments of the HP, with the VP segments extending from -22.5 degrees to 22.5 degrees. Part 2 consisted of measuring, on the same subjects, head-related transfer functions by means of a miniature microphone placed at the entrance of their external ear canal. From these data, the 'covert' peaks (defined and illustrated in text) of the sound spectrum were extracted. This spectral cue was advanced to explain why monaural listeners in this study as well as in other studies performed better when locating VP-positioned sounds than when locating HP-positioned sounds. It is not claimed that there is inherent advantage for localizing sound in the VP; rather, monaural localization proficiency, whether in the VP or HP, depends on the availability of covert peaks which, in turn, rests on the spatial arrangement of the sound sources.

14.
Speech comprehension is the psychological process by which a listener receives external speech input and derives meaning. In everyday communication, auditory speech comprehension is influenced by rhythmic information at multiple timescales; three common external rhythms are prosodic-structure rhythm, contextual rhythm, and the rhythm of the speaker's body language. These alter processes in speech comprehension such as phoneme discrimination, word perception, and speech intelligibility. Internal rhythm manifests as neural oscillations in the brain, which can represent the hierarchical features of external speech input at different timescales. Neural entrainment between external rhythmic stimulation and internal neural activity can optimize the brain's processing of speech stimuli, and is modulated by the listener's top-down cognitive processes to further strengthen the internal representation of the target speech. We propose that such entrainment may be the key mechanism linking internal and external rhythms in their joint influence on speech comprehension. Revealing these internal and external rhythms and the mechanisms connecting them offers a research window for understanding speech, a complex sequence with structural regularities at multiple hierarchical timescales.

15.
The importance of selecting between a target and a distractor in producing auditory negative priming was examined in three experiments. In Experiment 1, participants were presented with a prime pair of sounds, followed by a probe pair of sounds. For each pair, listeners were to identify the sound presented to the left ear. Under these conditions, participants were especially slow to identify a sound in the probe pair if it had been ignored in the preceding prime pair. Evidence of auditory negative priming was also apparent when the prime sound was presented in isolation to only one ear (Experiment 2) and when the probe target was presented in isolation to one ear (Experiment 3). In addition, the magnitude of the negative priming effect was increased substantially when only a single prime sound was presented. These results suggest that the emergence of auditory negative priming does not depend on selection between simultaneous target and distractor sounds.

16.
Ecological Psychology, 2013, 25(2): 87–110
Rising acoustic intensity can indicate movement of a sound source toward a listener. Perceptual overestimation of intensity change could provide a selective advantage by indicating that the source is closer than it actually is, providing a better opportunity for the listener to prepare for the source's arrival. In Experiment 1, listeners heard equivalent rising and falling level sounds and indicated whether one demonstrated a greater change in loudness than the other. In 2 subsequent experiments listeners heard equivalent approaching and receding sounds and indicated perceived starting and stopping points of the auditory motion. Results indicate that rising intensity changed in loudness more than equivalent falling intensity, and approaching sounds were perceived as starting and stopping closer than equidistant receding sounds. Both effects were greater for tones than for noise. Evidence is presented that suggests that an asymmetry in the neural coding of egocentric auditory motion is an adaptation that provides advanced warning of looming acoustic sources.

17.
A series of three experiments explored the relationship between 3-year-old children's ability to name target body parts and their untrained matching of target hand-to-body touches. Nine participants, 3 per experiment, were presented with repeated generalized imitation tests in a multiple-baseline procedure, interspersed with step-by-step training that enabled them to (i) tact the target locations on their own and the experimenter's bodies or (ii) respond accurately as listeners to the experimenter's tacts of the target locations. Prompts for on-task naming of target body parts were also provided later in the procedure. In Experiment 1, only tact training followed by listener probes were conducted; in Experiment 2, tacting was trained first and listener behavior second, whereas in Experiment 3 listener training preceded tact training. Both tact and listener training resulted in emergence of naming together with significant and large improvements in the children's matching performances; this was true for each child and across most target gestures. The present series of experiments provides evidence that naming, the most basic form of self-instructional behavior, may be one means of establishing untrained matching as measured in generalized imitation tests. This demonstration has a bearing on our interpretation of imitation reported in the behavior analytic, cognitive developmental, and comparative literature.

18.
In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions.

19.
In whole report, a sentence presented sequentially at the rate of about 10 words/s can be recalled accurately, whereas if the task is to report only two target words (e.g., red words), the second target suffers an attentional blink if it appears shortly after the first target. If these two tasks are carried out simultaneously, is there an attentional blink, and does it affect both tasks? Here, sentence report was combined with report of two target words (Experiments 1 and 2) or two inserted target digits, Arabic numerals or word digits (Experiments 3 and 4). When participants reported only the targets, an attentional blink was always observed. When they reported both the sentence and targets, sentence report was quite accurate, but there was an attentional blink in picking out the targets when they were part of the sentence. When targets were extra digits inserted in the sentence, there was no blink when viewers also reported the sentence. These results challenge some theories of the attentional blink: Blinks result from online selection, not perception or memory.

20.
Subjects (average age 21 years, recruited by personal contact and through a school) were presented with a spoken sentence on tape and then heard six speakers of the same sex, including the original speaker, say the same sentence. They were required to indicate which was the original speaker. The task was repeated with seven different sentences and sets of speakers. One group of subjects heard short sentences containing an average of 2.14 different vowel sounds and 6.28 syllables, another group heard short sentences containing an average of 6.14 vowel sounds (7.28 syllables) and a third group heard longer sentences containing an average of 6.28 vowel sounds (11.00 syllables). Accuracy of speaker identification improved significantly when more vowel sounds were heard, but increased sentence length had no significant effect on performance. Performance was significantly better when the listener was the same sex as the speaker than when the listener was of the other sex.


Copyright©北京勤云科技发展有限公司  京ICP备09084417号