Similar Articles
20 similar articles found
1.
Spoken language perception may be constrained by a listener's cognitive resources, including verbal working memory (WM) capacity and basic auditory perception mechanisms. For Japanese listeners, it is unknown how, or even if, these resources are involved in the processing of pitch accent at the word level. The present study examined the extent to which native Japanese speakers could make correctness judgments on and categorize spoken Japanese words by pitch accent pattern, and how verbal WM capacity and acoustic pitch sensitivity related to perception ability. Results showed that Japanese listeners were highly accurate at judging pitch accent correctness (M = 93%), but that the more cognitively demanding accent categorization task yielded notably lower performance (M = 61%). Of chief interest was the finding that acoustic pitch sensitivity significantly predicted accuracy scores on both perception tasks, while verbal WM had a predictive role only for the categorization of a specific accent pattern. These results indicate first, that task demands greatly influence accuracy and second, that basic cognitive capacities continue to support perception of lexical prosody even in adult listeners.

2.
Understanding the circumstances under which talker (and other types of) variability affects language perception represents an important area of research in the field of spoken word recognition. Previous work has demonstrated that talker effects are more likely when processing is relatively slow (McLennan & Luce, 2005). Given that listeners may take longer to process foreign-accented speech than native-accented speech (Munro & Derwing, Language and Speech, 38, 289–306, 1995), talker effects should be more likely when listeners are presented with words spoken in a foreign accent than when they are presented with those same words spoken in a native accent. The results of two experiments, conducted in two different countries and in two different languages, are consistent with this prediction.

3.
Languages such as Swedish use suprasegmental information such as tone, over and above segments, to mark lexical contrast. Theories differ with respect to the abstractness and specification of tone in the mental lexicon. In a forced choice task, we tested Swedish listeners’ responses to words with segmentally identical first syllables differing in tonal contours (characterized as Accents 1 and 2). We assumed Accent 1 to be lexically specified for a subset of words and hypothesized that this specification would speed up word accent identification. As was predicted, listeners were fastest in choosing the tonally correct word when the accent was lexically specified. We conclude that the processing of surface tonal contours is governed by their underlying lexical structure with tonal specification.

4.
Speech carries accent information relevant to determining the speaker’s linguistic and social background. A series of web-based experiments demonstrate that accent cues can modulate access to word meaning. In Experiments 1–3, British participants were more likely to retrieve the American dominant meaning (e.g., hat meaning of “bonnet”) in a word association task if they heard the words in an American than a British accent. In addition, results from a speeded semantic decision task (Experiment 4) and sentence comprehension task (Experiment 5) confirm that accent modulates on-line meaning retrieval such that comprehension of ambiguous words is easier when the relevant word meaning is dominant in the speaker’s dialect. Critically, neutral-accent speech items, created by morphing British- and American-accented recordings, were interpreted in a similar way to accented words when embedded in a context of accented words (Experiment 2). This finding indicates that listeners do not use accent to guide meaning retrieval on a word-by-word basis; instead they use accent information to determine the dialectic identity of a speaker and then use their experience of that dialect to guide meaning access for all words spoken by that person. These results motivate a speaker-model account of spoken word recognition in which comprehenders determine key characteristics of their interlocutor and use this knowledge to guide word meaning access.

5.
Recent data suggest that the first presentation of a foreign accent triggers a delay in word identification, followed by a subsequent adaptation. This study examines under what conditions the delay returns to baseline. The delay was experimentally induced by presenting listeners with sentences spoken in a foreign or a regional accent as part of a lexical decision task for words placed at the end of sentences. Using a blocked design of accent presentation, Experiment 1 shows that accent changes cause a temporary perturbation in reaction times, followed by a smaller but long-lasting delay. Experiment 2 shows that the initial perturbation depends on participants’ expectations about the task. Experiment 3 confirms that the subsequent long-lasting delay in word identification does not habituate after repeated exposure to the same accent. Results suggest that the comprehensibility of accented speech, as measured by reaction times, does not benefit from accent exposure, in contrast to intelligibility.

6.
Speech processing requires sensitivity to long-term regularities of the native language yet demands that listeners flexibly adapt to perturbations arising from talker idiosyncrasies such as nonnative accent. The present experiments investigate whether listeners exhibit dimension-based statistical learning of correlations between acoustic dimensions defining the perceptual space for a given speech segment. While engaged in a word recognition task guided by perceptually unambiguous voice-onset time (VOT) acoustics signaling beer, pier, deer, or tear, listeners were incidentally exposed to an artificial "accent" deviating from English norms in its correlation of the pitch onset of the following vowel (F0) with VOT. Results across four experiments are indicative of rapid, dimension-based statistical learning; reliance on the F0 dimension in word recognition was rapidly down-weighted in response to the perturbation of the correlation between the F0 and VOT dimensions. However, listeners did not simply mirror the short-term input statistics. Instead, response patterns were consistent with a lingering influence of sensitivity to the long-term regularities of English. This suggests that the very acoustic dimensions defining perceptual space are not fixed but, rather, are dynamically and rapidly adjusted to the idiosyncrasies of local experience, such as might arise from nonnative accent, dialect, or dysarthria. The current findings extend demonstrations of "object-based" statistical learning across speech segments to include incidental, online statistical learning of regularities residing within a speech segment.

7.
8.
The current paper examines an “other-accent” effect when recognising voices. English and Scottish listeners were tested with English and Scottish voices using a sequential lineup method. The results suggested greater accuracy for own-accent voices than for other-accent voices under both target-present and target-absent conditions. Moreover, self-rated confidence in response to target-absent lineups suggested greater confidence for own-accent voices than other-accent voices. As predicted, the other-accent effect noted here emerged more strongly for English listeners than for Scottish listeners, and these results are discussed within an expertise framework alongside both other-race effects in face recognition, and other-accent effects in word recognition. Given these results, caution is advised in the treatment of earwitness evidence when recognising a voice of another accent.

9.
Evidence is presented that (a) the open and the closed word classes in English have different phonological characteristics, (b) the phonological dimension on which they differ is one to which listeners are highly sensitive, and (c) spoken open- and closed-class words produce different patterns of results in some auditory recognition tasks. What implications might link these findings? Two recent lines of evidence from disparate paradigms—the learning of an artificial language, and natural and experimentally induced misperception of juncture—are summarized, both of which suggest that listeners are sensitive to the phonological reflections of open- vs. closed-class word status. Although these correlates cannot be strictly necessary for efficient processing, if they are present listeners exploit them in making word class assignments. That such a use of phonological information is of value to listeners could be indirect evidence that open- vs. closed-class words undergo different processing operations.

10.
How and when do children become aware that speakers have different accents? While adults readily make a variety of subtle social inferences based on speakers’ accents, findings from children are more mixed: while one line of research suggests that even infants may be acutely sensitive to accent unfamiliarity, other studies suggest that 5‐year‐olds have difficulty identifying accents as different from their own. In an attempt to resolve this paradox, the current study assesses American children's sensitivity to American vs. Dutch accents in two situations. First, in an eye‐tracked sentence processing paradigm where children have previously shown sensitivity to a salient social distinction (gender) from voice cues, 3–5‐year‐old children showed no sensitivity to accent differences. Second, in a social decision‐making task where accent sensitivity has been found in 5‐year‐olds, an age gradient appeared, suggesting that familiar accent preferences emerge slowly between 3 and 7 years. Counter to claims that accent is an early, salient signal of social group, results are more consistent with a protracted learning hypothesis that children need extended exposure to native‐language sound patterns in order to detect that an accent deviates from their own. A video abstract of this article can be viewed at: https://www.youtube.com/watch?v=BQAgy3IFYXA

11.
Speech comprehension is resistant to acoustic distortion in the input, reflecting listeners' ability to adjust perceptual processes to match the speech input. This adjustment is reflected in improved comprehension of distorted speech with experience. For noise vocoding, a manipulation that removes spectral detail from speech, listeners' word report showed a significantly greater improvement over trials for listeners who heard clear speech presentations before rather than after hearing distorted speech (clear-then-distorted compared with distorted-then-clear feedback, in Experiment 1). This perceptual learning generalized to untrained words, suggesting a sublexical locus for learning, and was equivalent for word and nonword training stimuli (Experiment 2). These findings point to the crucial involvement of phonological short-term memory and top-down processes in the perceptual learning of noise-vocoded speech. Similar processes may facilitate comprehension of speech in an unfamiliar accent or following cochlear implantation.

12.
Visual information conveyed by iconic hand gestures and visible speech can enhance speech comprehension under adverse listening conditions for both native and non‐native listeners. However, how a listener allocates visual attention to these articulators during speech comprehension is unknown. We used eye‐tracking to investigate whether and how native and highly proficient non‐native listeners of Dutch allocated overt eye gaze to visible speech and gestures during clear and degraded speech comprehension. Participants watched video clips of an actress uttering a clear or degraded (6‐band noise‐vocoded) action verb while performing a gesture or not, and were asked to indicate the word they heard in a cued‐recall task. Gestural enhancement was the largest (i.e., a relative reduction in reaction time cost) when speech was degraded for all listeners, but it was stronger for native listeners. Both native and non‐native listeners mostly gazed at the face during comprehension, but non‐native listeners gazed more often at gestures than native listeners. However, only native but not non‐native listeners' gaze allocation to gestures predicted gestural benefit during degraded speech comprehension. We conclude that non‐native listeners might gaze at gesture more as it might be more challenging for non‐native listeners to resolve the degraded auditory cues and couple those cues to phonological information that is conveyed by visible speech. This diminished phonological knowledge might hinder the use of semantic information that is conveyed by gestures for non‐native compared to native listeners. Our results demonstrate that the degree of language experience impacts overt visual attention to visual articulators, resulting in different visual benefits for native versus non‐native listeners.

13.
The question of whether Dutch listeners rely on the rhythmic characteristics of their native language to segment speech was investigated in three experiments. In Experiment 1, listeners were induced to make missegmentations of continuous speech. The results showed that word boundaries were inserted before strong syllables and deleted before weak syllables. In Experiment 2, listeners were required to spot real CVC or CVCC words (C = consonant, V = vowel) embedded in bisyllabic nonsense strings. For CVCC words, fewer errors were made when the second syllable of the nonsense string was weak rather than strong, whereas for CVC words the effect was reversed. Experiment 3 ruled out an acoustic explanation for this effect. It is argued that these results are in line with an account in which both metrical segmentation and lexical competition play a role.

14.
The effect of exposure to the contextual features of the /pt/ cluster was investigated in native-English and native-Polish listeners using behavioral and event-related potential (ERP) methodology. Both groups experience the /pt/ cluster in their languages, but only the Polish group experiences the cluster in the word-onset context examined in the current experiment. The /st/ cluster was used as an experimental control. ERPs were recorded while participants identified the number of syllables in the second word of nonsense word pairs. The results showed that only Polish listeners accurately perceived the /pt/ cluster, and perception was reflected within a late positive component of the ERP waveform. Furthermore, evidence of discrimination of /pt/ and /pət/ onsets in the neural signal was found even for non-native listeners who could not perceive the difference. These findings suggest that exposure to phoneme sequences in highly specific contexts may be necessary for accurate perception.

15.
Six speech samples containing varying amounts of whole-word repetitions were tape-recorded and presented to 36 male and 36 female listeners. For each sample, listeners were asked to make judgments of fluent, disfluent, and stuttered speech, and to answer the question, “Would you recommend speech therapy?” Results showed that samples containing 5% or more word repetitions were not judged fluent speech by a majority of listeners. Judgments of disfluent and stuttered speech were nearly equal for speech samples containing word repetitions from 5% to 15%. At 20%, however, the judgments of stuttered speech were found to be more likely than judgments of disfluent speech. A majority of listeners recommended clinical services for speech samples containing 5% or more word repetitions. Generally, the results indicated that (1) the presence of whole-word repetitions is not normal regardless of frequency, (2) fluent speech may not contain 5% or more word repetitions, and (3) with 20% word repetitions the judgments of stuttering may be more likely than judgments of disfluency.

16.

More cognitive resources are required to comprehend foreign-accented than native speech. Focusing these cognitive resources on resolving the acoustic mismatch between the foreign-accented input and listeners’ stored representations of spoken words can affect other cognitive processes. Across two studies, we explored whether processing foreign-accented speech reduces the activation of semantic information. This was achieved using the DRM paradigm, in which participants study word lists and typically falsely remember non-studied words (i.e. critical lures) semantically associated with the studied words. In two experiments, participants were presented with word lists spoken both by a native and a foreign-accented speaker. In both experiments we observed lower false recognition rates for the critical lures associated with word lists presented in a foreign accent, compared to native speech. In addition, participants freely recalled more studied words when they had been presented in a native, compared to a foreign, accent, although this difference only emerged in Experiment 2, where the foreign speaker had a very strong accent. These observations suggest that processing foreign-accented speech modulates the activation of semantic information.

Highlights
  • The DRM paradigm was used to explore whether semantic activation is reduced when processing foreign-accented speech.

  • Across two experiments, false recognition of non-studied semantic associates was lower when word lists were presented in a foreign accent, compared to native speech.

  • The above results suggest semantic activation may be reduced when processing foreign-accented speech.

  • Additionally, it was found that when the foreign speaker had a mild accent, correct recall of studied words was unaffected. When the foreign speaker had a strong accent, however, correct recall of studied words was reduced.


17.
Information structure, an important concept in linguistics, has been studied extensively in linguistics, psychology, and neuroscience, with most research examining the dimension of focus versus background. Typically, speakers accent focused information. Using the ERP technique, the present study examined, in dialogue discourse, how the consistency between contrastive focus at different positions and accentuation affects spoken discourse comprehension. The results showed that contrastive focus, regardless of position, reliably elicited a positivity with a central-posterior distribution, and that the positive effect elicited by clause-final focus emerged earlier than that elicited by clause-internal focus. In addition, accented relative to unaccented information elicited a positivity both clause-internally and clause-finally, appearing in a later time window. Although unaccented focus elicited no ERP effect relative to consistently accented focus, accented background relative to unaccented background elicited an early negativity at clause-final position. These findings indicate that listeners use contrastive focus and accentuation information at different positions in distinct ways, and in real time, to construct discourse representations.

18.
Multiple reports have described patients with disordered articulation and prosody, often following acute aphasia, dysarthria, or apraxia of speech, which results in the perception by listeners of a foreign-like accent. These features led to the term foreign accent syndrome (FAS), a speech disorder with perceptual features that suggest an indistinct, non-native speaking accent. Also known as pseudoforeign accent, the speech does not typically match a specific foreign accent, but is rather a constellation of speech features that result in the perception of a foreign accent by listeners. The primary etiologies of FAS are cerebrovascular accidents or traumatic brain injuries which affect cortical and subcortical regions critical to expressive speech and language production. Far fewer cases of FAS associated with psychiatric conditions have been reported. We will present the clinical history, neurological examination, neuropsychological assessment, cognitive-behavioral and biofeedback assessments, and motor speech examination of a patient with FAS without a known vascular, traumatic, or infectious precipitant. Repeated multidisciplinary examinations of this patient provided convergent evidence in support of FAS secondary to conversion disorder. We discuss these findings and their implications for evaluation and treatment of rare neurological and psychiatric conditions.

19.
In six experiments with English‐learning infants, we examined the effects of variability in voice and foreign accent on word recognition. We found that 9‐month‐old infants successfully recognized words when two native English talkers with dissimilar voices produced test and familiarization items (Experiment 1). When the domain of variability was shifted to include variability in voice as well as in accent, 13‐, but not 9‐month‐olds, recognized a word produced across talkers when only one had a Spanish accent (Experiments 2 and 3). Nine‐month‐olds accommodated some variability in accent by recognizing words when the same Spanish‐accented talker produced familiarization and test items (Experiment 4). However, 13‐, but not 9‐month‐olds, could do so when test and familiarization items were produced by two distinct Spanish‐accented talkers (Experiments 5 and 6). These findings suggest that, although monolingual 9‐month‐olds have abstract phonological representations, these representations may not be flexible enough to accommodate the modifications found in foreign‐accented speech.

20.
Everyday speech is littered with disfluency, often correlated with the production of less predictable words (e.g., Beattie & Butterworth, 1979). But what are the effects of disfluency on listeners? In an ERP experiment which compared fluent to disfluent utterances, we established an N400 effect for unpredictable compared to predictable words. This effect, reflecting the difference in ease of integrating words into their contexts, was reduced in cases where the target words were preceded by a hesitation marked by the word er. Moreover, a subsequent recognition memory test showed that words preceded by disfluency were more likely to be remembered. The study demonstrates that hesitation affects the way in which listeners process spoken language, and that these changes are associated with longer-term consequences for the representation of the message.
