Similar Articles
20 similar articles found
1.
This pilot study concerned the intelligibility of accented speech for listeners of different ages. Seventy-two native speakers of English, representing three age groups (20-39, 40-59, 60 and older), listened to words and sentences produced by native speakers of English, Taiwanese, and Spanish. Listeners transcribed the words and sentences. Listeners also rated speakers' comprehensibility (listeners' perceptions of difficulty in understanding utterances) and accentedness (how strong a speaker's foreign accent is perceived to be). On intelligibility measures, older adults had significantly greater difficulty understanding accented speech than the other two age groups. Listeners, regardless of age, were more likely to provide correct responses if they perceived the speaker to be easier to understand. Ratings of comprehensibility were highly correlated with ratings of accentedness.

2.
Borden’s (1979, 1980) hypothesis that speakers with vulnerable speech systems rely more heavily on feedback monitoring than do speakers with less vulnerable systems was investigated. The second language (L2) of a speaker is vulnerable in comparison with the native language, so alteration to feedback should have a detrimental effect on it, according to this hypothesis. Here, we specifically examined whether altered auditory feedback has an effect on accent strength when speakers speak L2. There were three stages in the experiment. First, 6 German speakers who were fluent in English (their L2) were recorded under six conditions: normal listening, amplified voice level, frequency-shifted voice, delayed auditory feedback, and slowed and accelerated speech rate. Second, judges were trained to rate accent strength. Training was assessed by whether it was successful in separating German speakers speaking English from native English speakers, also speaking English. In the final stage, the judges ranked recordings of each speaker from the first stage as to increasing strength of German accent. The results show that accents were more pronounced under frequency-shifted and delayed auditory feedback conditions than under normal or amplified feedback conditions. Control tests were done to ensure that listeners were judging accent, rather than fluency changes caused by altered auditory feedback. The findings are discussed in terms of Borden’s hypothesis and other accounts of why altered auditory feedback disrupts speech control.
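The delayed-auditory-feedback manipulation used in studies like this one is, computationally, just a fixed playback delay of the speaker's own voice. A minimal numpy sketch of that delay stage (the function name, sample rate, and the 200 ms value are illustrative assumptions, not parameters reported by the study):

```python
import numpy as np

def delayed_feedback(signal: np.ndarray, sample_rate: int, delay_ms: float) -> np.ndarray:
    """Return the signal delayed by delay_ms, padded with leading silence.

    Models the playback channel of a DAF setup: the speaker hears
    their own voice delay_ms later than they produce it.
    """
    delay_samples = int(round(sample_rate * delay_ms / 1000.0))
    return np.concatenate([np.zeros(delay_samples), signal])

# One second of a 440 Hz "voice" at 16 kHz, delayed by a typical 200 ms.
sr = 16000
t = np.arange(sr) / sr
voice = np.sin(2 * np.pi * 440 * t)
heard = delayed_feedback(voice, sr, 200.0)
```

In a real-time system the same delay would be implemented with a circular buffer rather than concatenation, but the perceptual manipulation is identical.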

4.
Listeners must cope with a great deal of variability in the speech signal, and thus theories of speech perception must also account for variability, which comes from a number of sources, including variation between accents. It is well known that there is a processing cost when listening to speech in an accent other than one's own, but recent work has suggested that this cost is reduced when listening to a familiar accent widely represented in the media, and/or when short amounts of exposure to an accent are provided. Little is known, however, about how these factors (long-term familiarity and short-term familiarization with an accent) interact. The current study tested this interaction by playing listeners difficult-to-segment sentences in noise, before and after a familiarization period where the same sentences were heard in the clear, allowing us to manipulate short-term familiarization. Listeners were speakers of either Glasgow English or Standard Southern British English, and they listened to speech in either their own or the other accent, thereby allowing us to manipulate long-term familiarity. Results suggest that both long-term familiarity and short-term familiarization mitigate the perceptual processing costs of listening to an accent that is not one's own, but seem not to compensate for them entirely, even when the accent is widely heard in the media.

5.
What happens when speakers try to "dodge" a question they would rather not answer by answering a different question? In 4 studies, we show that listeners can fail to detect dodges when speakers answer similar, but objectively incorrect, questions (the "artful dodge"), a detection failure that goes hand-in-hand with a failure to rate dodgers more negatively. We propose that dodges go undetected because listeners' attention is not usually directed toward a goal of dodge detection (i.e., Is this person answering the question?) but rather toward a goal of social evaluation (i.e., Do I like this person?). Listeners were not blind to all dodge attempts, however. Dodge detection increased when listeners' attention was diverted from social goals toward determining the relevance of the speaker's answers (Study 1), when speakers answered a question egregiously dissimilar to the one asked (Study 2), and when listeners' attention was directed to the question asked by keeping it visible during speakers' answers (Study 4). We also examined the interpersonal consequences of dodge attempts: When listeners were guided to detect dodges, they rated speakers more negatively (Study 2), and listeners rated speakers who answered a similar question in a fluent manner more positively than speakers who answered the actual question but disfluently (Study 3). These results add to the literatures on both Gricean conversational norms and goal-directed attention. We discuss the practical implications of our findings in the contexts of interpersonal communication and public debates.

6.
The present study investigated the perception and production of English /w/ and /v/ by native speakers of Sinhala, German, and Dutch, with the aim of examining how their native language phonetic processing affected the acquisition of these phonemes. Subjects performed a battery of tests that assessed their identification accuracy for natural recordings, their degree of spoken accent, their relative use of place and manner cues, the assimilation of these phonemes into native-language categories, and their perceptual maps (i.e., multidimensional scaling solutions) for these phonemes. Most Sinhala speakers had near-chance identification accuracy, Germans ranged from chance to 100% correct, and Dutch speakers had uniformly high accuracy. The results suggest that these learning differences were caused more by perceptual interference than by category assimilation; Sinhala and German speakers both have a single native-language phoneme that is similar to English /w/ and /v/, but the auditory sensitivities of Sinhala speakers make it harder for them to discern the acoustic cues that are critical to /w/-/v/ categorization.
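The "perceptual maps" mentioned above are multidimensional scaling solutions: low-dimensional embeddings recovered from listeners' pairwise dissimilarity or confusion data. A classical (Torgerson) MDS sketch in numpy, offered only to illustrate the technique, not the study's actual analysis pipeline (the toy 2-D configuration is an assumption for demonstration):

```python
import numpy as np

def classical_mds(dist: np.ndarray, k: int = 2) -> np.ndarray:
    """Embed n points in k dimensions from an n x n distance matrix."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n      # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j           # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(b)
    top = np.argsort(vals)[::-1][:k]         # k largest eigenvalues
    return vecs[:, top] * np.sqrt(np.maximum(vals[top], 0.0))

# Recover a 2-D configuration from exact Euclidean distances.
pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dist = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
embedding = classical_mds(dist, k=2)
```

The embedding is unique only up to rotation and reflection, so recovery is checked by comparing pairwise distances rather than raw coordinates.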

7.
Factors affecting perceptions of occupational suitability were examined for speakers who stutter and speakers who do not stutter. In Experiment 1, 58 adults who do not stutter heard one of two audio recordings (less severe stuttering, more severe stuttering) of a speaker who stuttered. Participants rated the speaker's communicative functioning, personal attributes, and suitability for 32 occupations, along with perceptions of the occupations' speaking demands and educational requirements. Perceived speaking demand strongly affected occupational suitability ratings at both levels of stuttering severity. In Experiment 2, 58 additional adults who do not stutter heard a recording of another adult in one of two conditions (fluent speech, pseudo-stuttering), and provided the same ratings as in Experiment 1. In the pseudo-stuttering condition, participants' perceptions of occupational speaking demand again had a strong effect on occupational suitability ratings. In the fluent condition, suitability ratings were affected primarily by perceived educational demand; perceived speaking demand was of secondary importance. Across all participants in Experiment 2, occupational suitability ratings were associated with ratings of the speaker's personal attributes and communicative functioning. In both experiments, speakers who stuttered received lower suitability ratings for high speaking demand occupations than for low speaking demand occupations. Ratings for many high speaking demand occupations, however, fell just below the midpoint of the occupational suitability scale, suggesting that participants viewed these occupations as less appropriate, but not necessarily inappropriate, for people who stutter. Overall, the findings support the hypothesis that people who stutter may face occupational stereotyping and/or role entrapment in work settings.
EDUCATIONAL OBJECTIVES: At the end of this activity the reader will be able to (a) summarize main findings on research related to the work-related experiences of people who stutter, (b) describe factors that affect perceptions of which occupations are best suited for speakers who stutter and speakers who do not stutter, and (c) discuss how findings from the present study relate to previous findings on occupational advice for people who stutter.

8.
Humans imitate each other during social interaction. This imitative behavior streamlines social interaction and aids in learning to replicate actions. However, the effect of imitation on action comprehension is unclear. This study investigated whether vocal imitation of an unfamiliar accent improved spoken-language comprehension. Following a pretraining accent comprehension test, participants were assigned to one of six groups. The baseline group received no training, but participants in the other five groups listened to accented sentences, listened to and repeated accented sentences in their own accent, listened to and transcribed accented sentences, listened to and imitated accented sentences, or listened to and imitated accented sentences without being able to hear their own vocalizations. Posttraining measures showed that accent comprehension was most improved for participants who imitated the speaker's accent. These results show that imitation may aid in streamlining interaction by improving spoken-language comprehension under adverse listening conditions.

9.
The silent period hypothesis was investigated by examining the speech development of AO, a Polish-speaking child who emigrated to the U.S. at age 7 years, 5 months, and was placed in the second grade of a rural Missouri school district that offered no instruction in English as a second language. AO was observed for 6 years, 8 months, in order to study the development of his English speech patterns. During this interval, recordings were made of five sentences produced by AO at five different age points; these, together with recordings from a control group of native and nonnative speakers, were rated by native speakers of American English. AO's accent showed a gradual decline during the first year of residence, receiving a rating of near-native speech. By age 14 years, 6 months, he was rated as having native speech performance. Observations of his language, social, and school development indicated that AO remained essentially silent during the first 6 months, using two- and three-word sentences only when necessary; that his social development was normal; and that his school achievement was not impeded by his placement in the grade level appropriate for his age. The conclusion was reached that AO's silent period experience contributed significantly to his development of English speech patterns.

10.
Two experiments examined the effects of processing fluency—that is, the ease with which speech is processed—on language attitudes toward native‐ and foreign‐accented speech. Participants listened to an audio recording of a story read in either a Standard American English (SAE) or Punjabi English (PE) accent. They heard the recording either free of noise or mixed with background white noise of various intensity levels. Listeners attributed more solidarity (but equal status) to the SAE than the PE accent. Compared to quieter listening conditions, noisier conditions reduced processing fluency, elicited a more negative affective reaction, and resulted in more negative language attitudes. Processing fluency and affect mediated the effects of noise on language attitudes. Theoretical, methodological, and practical implications are discussed.
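Mixing speech with white noise at controlled intensity levels amounts to scaling the noise to hit a target signal-to-noise ratio. A sketch of that mixing step (the function, seed, and the 5 dB value are illustrative assumptions; the study does not report its procedure in these terms):

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, snr_db: float, seed: int = 0) -> np.ndarray:
    """Add white noise to speech at a target signal-to-noise ratio in dB."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(len(speech))
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Scale noise so that 10 * log10(p_speech / p_scaled_noise) == snr_db.
    scale = np.sqrt(p_speech / (p_noise * 10.0 ** (snr_db / 10.0)))
    return speech + scale * noise

sr = 16000
speech = np.sin(2 * np.pi * 200 * np.arange(sr) / sr)  # stand-in "speech"
noisy = mix_at_snr(speech, snr_db=5.0)
```

Lower `snr_db` values yield the noisier conditions; the same speech material can thus be presented across a range of intensity levels.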

11.
The present study investigated the language familiarity hypothesis formulated by Mackay [(1970). How does language familiarity influence stuttering under delayed auditory feedback? Perceptual and Motor Skills, 30, 655-669] that bilinguals speak faster and stutter less under delayed auditory feedback (DAF) when speaking their more familiar language than a less familiar language. Thirty normally fluent native speakers of Dutch (17 males and 13 females, aged between 18;1 and 26;4 years) who were also proficient in French and English read meaningful and nonsense text under DAF in their mother tongue and in the two later acquired languages. The existence of a language familiarity effect was confirmed. The participants required significantly more time and showed significantly more speech disruptions under DAF in the later acquired languages than in the mother tongue, and reading time and number of speech disruptions were significantly higher for the nonsense texts than for the meaningful text in each of the three languages. An additional question addressed was whether or not there were any gender differences in the susceptibility to DAF. Results did not reveal a clear gender difference. EDUCATIONAL OBJECTIVES: The reader will be able to: (1) summarize the importance of language familiarity for the degree of speech disruption experienced by normally fluent multilingual speakers under delayed auditory feedback; and (2) describe gender differences in the susceptibility to delayed auditory feedback.

12.
A series of experiments investigated the effect of speakers' language, accent, and race on children's social preferences. When presented with photographs and voice recordings of novel children, 5-year-old children chose to be friends with native speakers of their native language rather than foreign-language or foreign-accented speakers. These preferences were not exclusively due to the intelligibility of the speech, as children found the accented speech to be comprehensible, and did not make social distinctions between foreign-accented and foreign-language speakers. Finally, children chose same-race children as friends when the target children were silent, but they chose other-race children with a native accent when accent was pitted against race. A control experiment provided evidence that children's privileging of accent over race was not due to the relative familiarity of each dimension. The results, discussed in an evolutionary framework, suggest that children preferentially evaluate others along dimensions that distinguished social groups in prehistoric human societies.

13.
We explored how speakers and listeners use hand gestures as a source of perceptual-motor information during naturalistic communication. After solving the Tower of Hanoi task either with real objects or on a computer, speakers explained the task to listeners. Speakers’ hand gestures, but not their speech, reflected properties of the particular objects and the actions that they had previously used to solve the task. Speakers who solved the problem with real objects used more grasping handshapes and produced more curved trajectories during the explanation. Listeners who observed explanations from speakers who had previously solved the problem with real objects subsequently treated computer objects more like real objects; their mouse trajectories revealed that they lifted the objects in conjunction with moving them sideways, and this behavior was related to the particular gestures that were observed. These findings demonstrate that hand gestures are a reliable source of perceptual-motor information during human communication.

14.
We conducted three experiments to examine the effects of information about a speaker's status on memory for the assertiveness of his or her remarks. Subjects either read (Experiments 1 and 2) or listened to a conversation (Experiment 3) and were later tested for their memory of the target speaker's remarks with either a recognition (Experiment 1) or a recall procedure (Experiments 2 and 3). In all experiments the target speaker's ostensible status was manipulated. In Experiment 1, subjects who believed the speaker was high in status were less able later to distinguish between remarks from the conversation and assertive paraphrases of those remarks. This result was replicated in Experiment 2, but only when the status information was provided before subjects read the conversation and not when the information was provided after the conversation had been read. Experiment 2's results eliminate a reconstructive memory interpretation and suggest that information about a speaker's status affects the encoding of remarks. Experiment 3 examined this effect in a more ecologically representative context.

15.
Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether listening to a second language (L2) is influenced by knowledge of the native language (L1) and, more important, whether listening to the L1 is also influenced by knowledge of an L2. Additionally, we investigated whether the listener's selectivity of lexical access is influenced by the speaker's L1 (and thus his or her accent). With this aim, Dutch-English bilinguals completed an English (Experiment 1) and a Dutch (Experiment 3) auditory lexical decision task. As a control, the English auditory lexical decision task was also completed by English monolinguals (Experiment 2). Targets were pronounced by a native Dutch speaker with English as the L2 (Experiments 1A, 2A, and 3A) or by a native English speaker with Dutch as the L2 (Experiments 1B, 2B, and 3B). In all experiments, Dutch-English bilinguals recognized interlingual homophones (e.g., lief [sweet]-leaf /li:f/) significantly slower than matched control words, whereas the English monolinguals showed no effect. These results indicate that (a) lexical access in bilingual auditory word recognition is not language selective in L2, nor in L1, and (b) language-specific subphonological cues do not annul cross-lingual interactions.

16.
Some influences of accent structure on melody recognition
Two experiments were carried out to investigate the roles of joint accent structure and familiarity in delayed recognition of relatively long tonal melodies. Melodic themes of target melodies were defined by correlating contour-related pitch accents with temporal accents (accent coupling) during an initial familiarization phase. Later, subjects gave recognition responses to key-transposed versions of the target melodies as well as to decoys with same and different contour accent patterns. In Experiment 1, all to-be-recognized melodies occurred both in an original rhythm, which preserved accent coupling, and in a new rhythm, which did not. Listeners were best at distinguishing targets from different decoys, especially in the original rhythm. In Experiment 2, the familiarity of target tunes and the rhythmic similarity in recognition were varied. Similar rhythms preserved accent coupling, whereas dissimilar rhythms did not. Listeners were most adept in distinguishing familiar targets from different decoys (Experiment 2A), particularly when they appeared in novel but similar rhythms. However, in similar rhythm conditions, listeners also frequently mistook same decoys for targets. With less familiar targets (Experiment 2B), these effects were attenuated, and performance showed general effects of pitch contour.

17.
Brief experience with reliable spectral characteristics of a listening context can markedly alter perception of subsequent speech sounds, and parallels have been drawn between auditory compensation for listening context and visual color constancy. In order to better evaluate such an analogy, the generality of acoustic context effects for sounds with spectral-temporal compositions distinct from speech was investigated. Listeners identified nonspeech sounds—extensively edited samples produced by a French horn and a tenor saxophone—following either resynthesized speech or a short passage of music. Preceding contexts were “colored” by spectral envelope difference filters, which were created to emphasize differences between French horn and saxophone spectra. Listeners were more likely to report hearing a saxophone when the stimulus followed a context filtered to emphasize spectral characteristics of the French horn, and vice versa. Despite clear changes in apparent acoustic source, the auditory system calibrated to relatively predictable spectral characteristics of filtered context, differentially affecting perception of subsequent target nonspeech sounds. This calibration to listening context and relative indifference to acoustic sources operates much like visual color constancy, for which reliable properties of the spectrum of illumination are factored out of perception of color.
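A spectral envelope difference filter of the kind described above boosts, in the context sound, the frequency regions where one source's long-term spectrum exceeds the other's. A stripped-down FFT sketch of the idea (the per-bin ratio gain and the synthetic sine-wave stand-ins for the instrument recordings are illustrative assumptions, not the study's actual filter design):

```python
import numpy as np

def longterm_magnitude(signal: np.ndarray) -> np.ndarray:
    """Long-term magnitude spectrum (a single FFT over the whole signal)."""
    return np.abs(np.fft.rfft(signal)) + 1e-12  # epsilon avoids divide-by-zero

def apply_gain(signal: np.ndarray, gain: np.ndarray) -> np.ndarray:
    """Filter a signal with a per-frequency-bin gain via the FFT."""
    return np.fft.irfft(np.fft.rfft(signal) * gain, n=len(signal))

# Difference filter: emphasize where "horn" energy exceeds "sax" energy.
sr, n = 8000, 8000
t = np.arange(n) / sr
horn = np.sin(2 * np.pi * 300 * t)    # toy stand-ins for the
sax = np.sin(2 * np.pi * 1200 * t)    # instrument recordings
gain = longterm_magnitude(horn) / longterm_magnitude(sax)
context = np.sin(2 * np.pi * 440 * t)
colored = apply_gain(context, gain)
```

Playing `colored` before a target sound gives the listening context the spectral tilt of one instrument; the reciprocal gain (`1 / gain`) would color the context toward the other.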

19.
Listeners identified spoken words, letters, and numbers and the spatial location of these utterances in three listening conditions as a function of the number of simultaneously presented utterances. The three listening conditions were a normal listening condition, in which the sounds were presented over seven possible loudspeakers to a listener seated in a sound-deadened listening room; a one-headphone listening condition, in which a single microphone that was placed in the listening room delivered the sounds to a single headphone worn by the listener in a remote room; and a stationary KEMAR listening condition, in which binaural recordings from an acoustic manikin placed in the listening room were delivered to a listener in the remote room. The listeners were presented one, two, or three simultaneous utterances. The results show that utterance identification was better in the normal listening condition than in the one-headphone condition, with the KEMAR listening condition yielding intermediate levels of performance. However, the differences between listening in the normal and in the one-headphone conditions were much smaller when two, rather than three, utterances were presented at a time. Localization performance was good for both the normal and the KEMAR listening conditions and at chance for the one-headphone condition. The results suggest that binaural processing is probably more important for solving the “cocktail party” problem when there are more than two concurrent sound sources.

20.
Linguistic background has been identified as important in the perception of pitch, particularly between tonal versus nontonal languages. In addition, a link between native language and the perception of musical pitch has also been established. This pilot study examined the perception of pitch between listeners from tonal and nontonal linguistic cultures where two different styles of music originate. Listeners were 10 individuals born in China, who ranged in age from 25 to 37 years and had spent an average of 30 mo. in the USA, and 10 individuals born on the Indian subcontinent, who ranged in age from 22 to 31 years and had spent an average of 13 mo. in the USA. Listeners from both groups participated in two conditions. One condition involved listening to a selection of music characteristic of the individual's culture (China, pentatonic scale; Indian subcontinent, microtones), and one condition involved no music. All listeners within each condition participated in two voice pitch-matching tasks. One task involved matching the lowest and highest pitch of tape-recorded voices to a note on an electronic keyboard. Another task involved matching the voice pitch of tape-recorded orally read words to a note on the keyboard. There were no differences between the two linguistic groups. Methodological limitations preclude generalization but provide the basis for further research.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号