Similar Documents
20 similar documents found (search time: 15 ms)
1.
We investigated how the strength of a foreign accent and varying types of experience with foreign-accented speech influence the recognition of accented words. In Experiment 1, native Dutch listeners with limited or extensive prior experience with German-accented Dutch completed a cross-modal priming experiment with strongly, medium, and weakly accented words. Participants with limited experience were primed by the medium and weakly accented words, but not by the strongly accented words. Participants with extensive experience were primed by all accent types. In Experiments 2 and 3, Dutch listeners with limited experience listened to a short story before doing the cross-modal priming task. In Experiment 2, the story was spoken by the priming-task speaker and either contained strongly accented words or did not. Exposure to strongly accented words led to immediate priming by novel strongly accented words, while exposure to the speaker without strongly accented tokens led to priming only in the experiment's second half. In Experiment 3, listeners heard the story with strongly accented words spoken by a different German-accented speaker. Listeners were primed by the strongly accented words, but again only in the experiment's second half. Together, these results show that adaptation to foreign-accented speech is rapid but depends on accent strength and on listener familiarity with those strongly accented words.

2.
In spelling-to-dictation tasks, skilled spellers consistently initiate spelling of high-frequency words faster than that of low-frequency words. Tainturier and Rapp's model of spelling identifies three possible loci for this frequency effect: spoken word recognition, orthographic retrieval, and response execution of the first letter. Thus far, researchers have attributed the effect solely to orthographic retrieval, without considering spoken word recognition or response execution. To investigate word frequency effects at each of these three loci, Experiment 1 involved a delayed spelling-to-dictation task and Experiment 2 involved a delayed/uncertain task. In Experiment 1, no frequency effect was found in the 1200-ms delayed condition, suggesting that response execution is not affected by word frequency. In Experiment 2, no frequency effect was found in the delayed/uncertain task, which reflects orthographic retrieval, whereas a frequency effect was found in the comparison immediate/uncertain task, which reflects both spoken word recognition and orthographic retrieval. The results of this two-part study suggest that frequency effects in spoken word recognition play a substantial role in skilled spelling-to-dictation. Discrepancies between these findings and previous research, and the limitations of the present study, are discussed.

3.
Recent data suggest that the first presentation of a foreign accent triggers a delay in word identification, followed by subsequent adaptation. This study examines under what conditions the delay returns to baseline. The delay was experimentally induced by presenting listeners with sentences spoken in a foreign or a regional accent, as part of a lexical decision task on sentence-final words. Using a blocked design of accent presentation, Experiment 1 shows that accent changes cause a temporary perturbation in reaction times, followed by a smaller but long-lasting delay. Experiment 2 shows that the initial perturbation depends on participants' expectations about the task. Experiment 3 confirms that the subsequent long-lasting delay in word identification does not habituate after repeated exposure to the same accent. The results suggest that the comprehensibility of accented speech, as measured by reaction times, does not benefit from accent exposure, in contrast to intelligibility.

4.
Many studies in bilingual visual word recognition have demonstrated that lexical access is not language selective. However, research on bilingual word recognition in the auditory modality has been scarce, and it has yielded mixed results with regard to the degree of this language nonselectivity. In the present study, we investigated whether listening to a second language (L2) is influenced by knowledge of the native language (L1) and, more important, whether listening to the L1 is also influenced by knowledge of an L2. Additionally, we investigated whether the listener's selectivity of lexical access is influenced by the speaker's L1 (and thus his or her accent). With this aim, Dutch-English bilinguals completed an English (Experiment 1) and a Dutch (Experiment 3) auditory lexical decision task. As a control, the English auditory lexical decision task was also completed by English monolinguals (Experiment 2). Targets were pronounced by a native Dutch speaker with English as the L2 (Experiments 1A, 2A, and 3A) or by a native English speaker with Dutch as the L2 (Experiments 1B, 2B, and 3B). In all experiments, Dutch-English bilinguals recognized interlingual homophones (e.g., lief [sweet]-leaf /li:f/) significantly slower than matched control words, whereas the English monolinguals showed no effect. These results indicate that (a) lexical access in bilingual auditory word recognition is not language selective in L2, nor in L1, and (b) language-specific subphonological cues do not annul cross-lingual interactions.

5.
In six experiments with English‐learning infants, we examined the effects of variability in voice and foreign accent on word recognition. We found that 9‐month‐old infants successfully recognized words when two native English talkers with dissimilar voices produced test and familiarization items (Experiment 1). When the domain of variability was shifted to include variability in voice as well as in accent, 13‐, but not 9‐month‐olds, recognized a word produced across talkers when only one had a Spanish accent (Experiments 2 and 3). Nine‐month‐olds accommodated some variability in accent by recognizing words when the same Spanish‐accented talker produced familiarization and test items (Experiment 4). However, 13‐, but not 9‐month‐olds, could do so when test and familiarization items were produced by two distinct Spanish‐accented talkers (Experiments 5 and 6). These findings suggest that, although monolingual 9‐month‐olds have abstract phonological representations, these representations may not be flexible enough to accommodate the modifications found in foreign‐accented speech.

6.
Thorpe, K., & Fernald, A. (2006). Cognition, 100(3), 389-433.
Three studies investigated how 24-month-olds and adults resolve temporary ambiguity in fluent speech when encountering prenominal adjectives potentially interpretable as nouns. Children were tested in a looking-while-listening procedure to monitor the time course of speech processing. In Experiment 1, the familiar and unfamiliar adjectives preceding familiar target nouns were accented or deaccented. Target word recognition was disrupted only when lexically ambiguous adjectives were accented like nouns. Experiment 2 measured the extent of interference experienced by children when interpreting prenominal words as nouns. In Experiment 3, adults used prosodic cues to identify the form class of adjective/noun homophones in string-identical sentences before the ambiguous words were fully spoken. Results show that children and adults use prosody in conjunction with lexical and distributional cues to ‘listen through’ prenominal adjectives, avoiding costly misinterpretation.

7.
This study examined whether children use prosodic correlates to word meaning when interpreting novel words. For example, do children infer that a word spoken in a deep, slow, loud voice refers to something larger than a word spoken in a high, fast, quiet voice? Participants were 4- and 5-year-olds who viewed picture pairs that varied along a single dimension (e.g., big vs. small flower) and heard a recorded voice asking them, for example, “Can you get the blicket one?” spoken with either meaningful or neutral prosody. The 4-year-olds failed to map prosodic cues to their corresponding meaning, whereas the 5-year-olds succeeded (Experiment 1). However, 4-year-olds successfully mapped prosodic cues to word meaning following a training phase that reinforced children’s attention to prosodic information (Experiment 2). These studies constitute the first empirical demonstration that young children are able to use prosody-to-meaning correlates as a cue to novel word interpretation.

8.
Translation in fluent bilinguals requires comprehension of a stimulus word and subsequent production, or retrieval and articulation, of the response word. Four repetition-priming experiments with Spanish–English bilinguals (N = 274) decomposed these processes using selective facilitation to evaluate their unique priming contributions and factorial combination to evaluate the degree of process overlap or dependence. In Experiment 1, symmetric priming between semantic classification and translation tasks indicated that bilinguals do not covertly translate words during semantic classification. In Experiments 2 and 3, semantic classification of words and word-cued picture drawing facilitated the word-comprehension processes of translation, and picture naming facilitated word-production processes. These effects were independent, consistent with a sequential model and with the conclusion that neither semantic classification nor word-cued picture drawing elicits covert translation. Experiment 4 showed that two tasks involving word-retrieval processes (written word translation and picture naming) had subadditive effects on later translation. Incomplete transfer from written translation to spoken translation indicated that preparation for articulation also benefited from repetition in the less-fluent language.

9.
Memory (Hove, England), 2013, 21(1), 101-109.
We investigated whether the sociolinguistic information delivered by spoken, accented postevent narratives would influence the misinformation effect. New Zealand subjects listened to misleading postevent information spoken in either a New Zealand (NZ) or North American (NA) accent. Consistent with earlier research, we found that NA accents were seen as more powerful and more socially attractive. We found that accents per se had no influence on the misinformation effect but sociolinguistic factors did: both power and social attractiveness affected subjects' susceptibility to misleading postevent suggestions. When subjects rated the speaker highly on power, social attractiveness did not matter; they were equally misled. However, when subjects rated the speaker low on power, social attractiveness did matter: subjects who rated the speaker high on social attractiveness were more misled than subjects who rated the speaker lower. There were similar effects for confidence. These results have implications for our understanding of social influences on the misinformation effect.

11.
Written word frequency (e.g., Francis & Kučera, 1982; Kučera & Francis, 1967) constitutes a popular measure of word familiarity, which is highly predictive of word recognition. Far less often, researchers employ spoken frequency counts in their studies. This discrepancy can be attributed most readily to the conspicuous absence of a sizeable spoken frequency count for American English. The present article reports the construction of a 1.6-million-word spoken frequency database derived from the Michigan Corpus of Academic Spoken English (Simpson, Swales, & Briggs, 2002). We generated spoken frequency counts for 34,922 words and extracted speaker attributes from the source material to generate relative frequencies of words spoken by each speaker category. We assess the predictive validity of these counts and discuss some possible applications outside of word recognition studies.
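As a rough sketch of the kind of counting this abstract describes (not the authors' actual pipeline; the corpus format and speaker-category labels below are hypothetical), raw frequency counts and per-category relative frequencies can be derived from tagged utterances like this:

```python
from collections import Counter, defaultdict

def frequency_counts(utterances):
    """Compute overall raw word frequencies and, per speaker category,
    each word's relative frequency (its share of that category's tokens).
    utterances: iterable of (speaker_category, text) pairs."""
    overall = Counter()
    by_speaker = defaultdict(Counter)
    for category, text in utterances:
        tokens = text.lower().split()  # naive tokenization for illustration
        overall.update(tokens)
        by_speaker[category].update(tokens)
    relative = {
        cat: {w: n / sum(counts.values()) for w, n in counts.items()}
        for cat, counts in by_speaker.items()
    }
    return overall, relative

# Hypothetical mini-corpus with speaker attributes attached to each utterance.
corpus = [
    ("faculty", "the results suggest the effect is robust"),
    ("student", "the effect was not significant"),
]
overall, relative = frequency_counts(corpus)
```

A real corpus would of course need proper tokenization and transcript parsing; the point is only the split between a global count and counts conditioned on speaker attributes.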

12.
What can be done to help college students who are not native speakers of English learn from computer‐based lessons that are presented in English? To help students access the meaning of spoken words in a slow‐paced 16‐minute narration about wildlife in Antarctica, a representational video was added that showed the scenes and animals being described in the narration (Experiment 1). Adding video resulted in improved performance of non‐native English speakers on a comprehension test (d = 0.63), perhaps because the video improved access to word meaning without creating extraneous cognitive load. To help students perceive the spoken words in a fast‐paced 9‐minute narrated video about chemical reactions, concurrent on‐screen captions were added (Experiment 2). Adding on‐screen captions did not improve performance by non‐native English speakers on comprehension tests, perhaps because learners did not have available capacity to take advantage of the captions. Implications for cognitive load theory are discussed. Copyright © 2014 John Wiley & Sons, Ltd.

13.
Does the spelling of a word mandatorily constrain spoken word production, or does it do so only when spelling is relevant for the production task at hand? Damian and Bowers (2003) reported spelling effects in spoken word production in English using a prompt-response word generation task. Preparation of the response words was disrupted when the responses shared initial phonemes that differed in spelling, suggesting that spelling constrains speech production mandatorily. The present experiments, conducted in Dutch, tested for spelling effects using word production tasks in which spelling was clearly relevant (oral reading in Experiment 1) or irrelevant (object naming and word generation in Experiments 2 and 3, respectively). Response preparation was disrupted by spelling inconsistency only in the word reading task, suggesting that the spelling of a word constrains spoken word production in Dutch only when it is relevant for the word production task at hand.

14.
We evaluated whether movement modulates the semantic processing of words. To this end, we used homograph words with two meanings, one associated with hand movements (e.g., ‘abanico’, ‘fan’ in Spanish) or foot movements (‘bota’, ‘boot’ in Spanish), and the other not associated with movement (‘abanico’, ‘range’ in Spanish; ‘bota’, ‘wineskin’ in Spanish). After the homograph, three words were presented, and participants were asked to choose the word related to one of the two homograph meanings. The words could be either related to the motor meaning of the homograph (‘fan-heat’), to the non-motor meaning of the homograph (‘range-possibility’) or unrelated (‘fan-phone’). The task was performed without movement (simple condition) or by performing hand (Experiment 1) and foot (Experiment 2) movements. Compared with the simple condition, the performance of movement oriented the preference towards the motor meaning of the homograph. This pattern of results confirms that movement modulates word comprehension.

15.
This study used event-related potentials (ERPs) to examine whether we employ the same normalisation mechanisms when processing words spoken with a regional accent or foreign accent. Our results showed that the Phonological Mapping Negativity (PMN) following the onset of the final word of sentences spoken with an unfamiliar regional accent was greater than for those produced in the listener’s own accent, whilst PMN for foreign accented speech was reduced. Foreign accents also resulted in a reduction in N400 amplitude when compared to both unfamiliar regional accents and the listener’s own accent, with no significant difference found between the N400 of the regional and home accents. These results suggest that regional accent related variations are normalised at the earliest stages of spoken word recognition, requiring less top-down lexical intervention than foreign accents.

16.
ABSTRACT

More cognitive resources are required to comprehend foreign-accented than native speech. Focusing these cognitive resources on resolving the acoustic mismatch between the foreign-accented input and listeners’ stored representations of spoken words can affect other cognitive processes. Across two studies, we explored whether processing foreign-accented speech reduces the activation of semantic information. This was achieved using the DRM paradigm, in which participants study word lists and typically falsely remember non-studied words (i.e. critical lures) semantically associated with the studied words. In two experiments, participants were presented with word lists spoken both by a native and a foreign-accented speaker. In both experiments we observed lower false recognition rates for the critical lures associated with word lists presented in a foreign accent, compared to native speech. In addition, participants freely recalled more studied words when they had been presented in a native, compared to a foreign, accent, although this difference only emerged in Experiment 2, where the foreign speaker had a very strong accent. These observations suggest that processing foreign-accented speech modulates the activation of semantic information.
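As a minimal sketch of the key DRM measure described above (the lists and responses below are hypothetical, not the study's materials), false recognition is scored as the proportion of non-studied critical lures that a participant nonetheless judges "old" on the recognition test:

```python
def false_recognition_rate(responses, critical_lures):
    """Proportion of non-studied critical lures incorrectly judged 'old'.
    responses: dict mapping test word -> 'old' or 'new' judgment."""
    false_alarms = sum(1 for w in critical_lures if responses.get(w) == "old")
    return false_alarms / len(critical_lures)

# Hypothetical test responses: the studied associates (bed, rest, dream)
# induce a false 'old' judgment on the never-presented lure "sleep".
responses = {"bed": "old", "rest": "old", "dream": "old",
             "sleep": "old", "chair": "new"}
rate = false_recognition_rate(responses, ["sleep"])
```

The experiments' central comparison is this rate for lures from lists heard in a foreign accent versus lists heard in native speech.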

Highlights
  • The DRM paradigm was used to explore whether semantic activation is reduced when processing foreign-accented speech.

  • Across two experiments, false recognition of non-studied semantic associates was lower when word lists were presented in a foreign accent, compared to native speech.

  • The above results suggest semantic activation may be reduced when processing foreign-accented speech.

  • Additionally, it was found that when the foreign speaker had a mild accent, correct recall of studied words was unaffected. When the foreign speaker had a strong accent, however, correct recall of studied words was reduced.


17.
The predictability of upcoming words facilitates both spoken and written language comprehension. One interesting difference between these language modalities is that readers routinely have access to upcoming words in parafoveal vision, while listeners must wait for each fleeting word from a speaker. Despite readers’ potential glimpse into the future, it is not clear if and how this bottom-up information aids top-down prediction. The current study manipulated the predictability of target words and their location on a line of text. Targets were located in the middle of the line (preview available) or as the first word on a new line (preview unavailable). This method manipulates parafoveal preview by using return sweeps to deny preview of target words without the use of invalid previews. The study is the first to demonstrate gaze duration word predictability effects in the absence of parafoveal preview.

18.
Two experiments investigated the mechanism by which listeners adjust their interpretation of accented speech that is similar to a regional dialect of American English. Only a subset of the vowels of English (the front vowels) were shifted during adaptation, which consisted of listening to a 20-min segment of the "Wizard of Oz." Compared to a baseline (unadapted) condition, listeners showed significant adaptation to the accented speech, as indexed by increased word judgments on a lexical decision task. Adaptation also generalized to test words that had not been presented in the accented passage but that contained the shifted vowels. A control experiment showed that the adaptation effect was specific to the direction of the shift in the vowel space and not to a general relaxation of the criterion for what constitutes a good exemplar of the accented vowel category. Taken together, these results provide evidence for a context-specific vowel adaptation mechanism that enables a listener to adjust to the dialect of a particular talker.

19.
The role of intonation in conveying discourse relationships in auditory sentence comprehension was investigated in two experiments. Using the simple comprehension time paradigm, Experiment 1 found that sentences with accented new information were understood faster than sentences with a neutral intonation contour and that the presence of accent in context sentences facilitated comprehension of subsequent targets. Both experiments showed faster comprehension times in conditions in which accent placement was appropriate for the information structure of the sentence. In Experiment 1, comprehension times were faster when the accent fell on the information focus than when it fell elsewhere in the sentence. In Experiment 2, faster times resulted when new information was accented and given information was not, compared to conditions in which this accent pattern was reversed. This effect held for both active and passive sentences, and whether the new information occurred in the subject or object position.

20.
Previous work has demonstrated that talker-specific representations affect spoken word recognition relatively late during processing. However, participants in these studies were listening to unfamiliar talkers. In the present research, we used a long-term repetition-priming paradigm and a speeded-shadowing task and presented listeners with famous talkers. In Experiment 1, half the words were spoken by Barack Obama, and half by Hillary Clinton. Reaction times (RTs) to repeated words were shorter than those to unprimed words only when repeated by the same talker. However, in Experiment 2, using nonfamous talkers, RTs to repeated words were shorter than those to unprimed words both when repeated by the same talker and when repeated by a different talker. Taken together, the results demonstrate that talker-specific details can affect the perception of spoken words relatively early during processing when words are spoken by famous talkers.
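As a minimal illustration of the priming measure such studies rely on (the RT values below are hypothetical), the priming effect is simply the difference in mean reaction time between unprimed and repeated words within a talker condition:

```python
from statistics import mean

def priming_effect(rt_unprimed, rt_repeated):
    """Priming effect in ms: positive values mean repeated words
    were responded to faster than unprimed words."""
    return mean(rt_unprimed) - mean(rt_repeated)

# Hypothetical shadowing RTs (ms) for a same-talker condition.
same_talker = priming_effect([820, 840, 800], [760, 780, 740])
```

Comparing this quantity across same-talker and different-talker conditions is what reveals whether talker-specific detail contributed to the repetition benefit.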
