Similar Documents
 20 similar documents found (search time: 31 ms)
1.
Geudens and Sandra, in their 2003 study, investigated the special role of onsets and rimes in Dutch-speaking children's explicit phonological awareness. In the current study, we tapped implicit phonological knowledge using forced-choice similarity judgment (Experiment 1) and recall of syllable lists (Experiment 2). In Experiment 1, Dutch-speaking prereaders judged rime-sharing pseudowords (/fas/-/mas/) to sound more similar than pseudowords sharing an equally sized nonrime unit (/fas/-/fak/). However, in a syllable recall task (/tɛf/, /ris/, /nal/), Dutch-speaking prereaders were as likely to produce recombination errors that broke up the rime (/tɛs/) as to produce errors that retained the rime (/rɛf/). Thus, a rime effect was obtained in a task that highlighted the phonological similarity between items sharing their rimes, but this effect disappeared in tasks without repetition of rime units. We conclude that children's sensitivity to rimes depends on similarity relations and might not reflect a fixed perceived structure of spoken syllables.

2.
In three experiments, we examined priming effects where primes were formed by transposing the first and last phoneme of tri‐phonemic target words (e.g., /byt/ as a prime for /tyb/). Auditory lexical decisions were found not to be sensitive to this transposed‐phoneme priming manipulation in long‐term priming (Experiment 1), with primes and targets presented in two separated blocks of stimuli and with unrelated primes used as control condition (/mul/‐/tyb/), while a long‐term repetition priming effect was observed (/tyb/‐/tyb/). However, a clear transposed‐phoneme priming effect was found in two short‐term priming experiments (Experiments 2 and 3), with primes and targets presented in close temporal succession. The transposed‐phoneme priming effect was found when unrelated prime‐target pairs (/mul/‐/tyb/) were used as control and, more importantly, when prime‐target pairs sharing the medial vowel (/pys/‐/tyb/) served as control condition, thus indicating that the effect is not due to vocalic overlap. Finally, in Experiment 3, a transposed‐phoneme priming effect was found when primes sharing the medial vowel plus one consonant in an incorrect position with the targets (/byl/‐/tyb/) served as control condition, and this condition did not differ significantly from the vowel‐only condition. Altogether, these results provide further evidence for a role for position‐independent phonemes in spoken word recognition, such that a phoneme at a given position in a word also provides evidence for the presence of words that contain that phoneme at a different position.

3.
This event-related potential (ERP) study examined the impact of phonological variation resulting from a vowel merger on phoneme perception. The perception of the /e/-/ε/ contrast, which does not exist in Southern French-speaking regions and which is in the process of merging in Northern French-speaking regions, was compared to the /ø/-/y/ contrast, which is stable in all French-speaking regions. French-speaking participants from Switzerland for whom the /e/-/ε/ contrast is preserved, but who are exposed to different regional variants, had to perform a same-different task. They first heard four phonemically identical but acoustically different syllables (e.g., /be/-/be/-/be/-/be/), and then heard the test syllable which was either phonemically identical to (/be/) or phonemically different from (/bε/) the preceding context stimuli. The results showed that the unstable /e/-/ε/ contrast only induced a mismatch negativity (MMN), whereas the /ø/-/y/ contrast elicited both an MMN and electrophysiological differences on the P200. These findings were in line with the behavioral results, in which responses were slower and more error-prone in the /e/-/ε/ deviant condition than in the /ø/-/y/ deviant condition. Together these findings suggest that the regional variability in the speech input to which listeners are exposed affects the perception of speech sounds in their own accent.

4.
The process of redintegration is thought to use top-down knowledge to repair partly damaged memory traces. We explored redintegration in the immediate recall of lists from a limited pool of partly phonologically redundant pseudowords. In Experiment 1, four kinds of stimuli were created by adding the syllable /ne/ to two-syllable pseudowords, either to the middle (/tepa/ vs. /tenepa/) or to the end (/tepane/), or adding a different syllable to each item (/tepalo/, /vuropi/). The repeated syllable was thought to be available for redintegration. Lists of two-syllable pseudowords were recalled best, items with a redundant end were intermediate, and items with a redundant middle syllable were as hard as nonredundant three-syllable items. In Experiment 2, the last syllable was predictable from context but not shared between all stimuli, reducing phonological similarity between items. Performance did not differ from the situation with identical last syllables. In Experiment 3, a shared first syllable had a detrimental effect on memory. An error analysis showed that beneficial redundancy effects were accompanied by harmful similarity effects, impairing memory for nonredundant syllables. The balance between the two effects depended on syllable position.

5.
The perception of consonant clusters that are phonotactically illegal word initially in English (e.g., /tl/, /sr/) was investigated to determine whether listeners’ phonological knowledge of the language influences speech processing. Experiment 1 examined whether the phonotactic context effect (Massaro & Cohen, 1983), a bias toward hearing illegal sequences (e.g., /tl/) as legal (e.g., /tr/), is more likely due to knowledge of the legal phoneme combinations in English or to a frequency effect. In Experiment 2, Experiment 1 was repeated with the clusters occurring word medially to assess whether phonotactic rules of syllabification modulate the phonotactic effect. Experiment 3 examined whether vowel epenthesis, another phonological process, might also affect listeners’ perception of illegal sequences as legal by biasing them to hear a vowel between the consonants of the cluster (e.g., /talee/). Results suggest that knowledge of the phonotactically permissible sequences in English can affect phoneme processing in multiple ways.

6.
Ward, J., & Simner, J. (2003). Cognition, 89(3), 237-261.
This study documents an unusual case of developmental synaesthesia, in which speech sounds induce an involuntary sensation of taste that is subjectively located in the mouth. JIW shows a highly structured, non-random relationship between particular combinations of phonemes (rather than graphemes) and the resultant taste, and this is influenced by a number of fine-grained phonemic properties (e.g. allophony, phoneme ordering). The synaesthesia is not found for environmental sounds. The synaesthesia, in its current form, is likely to have originated during vocabulary acquisition, since it is guided by learned linguistic and conceptual knowledge. The phonemes that trigger a given taste tend to also appear in the name of the corresponding foodstuff (e.g. /I/, /n/ and /s/ can trigger a taste of mince /mIns/) and there is often a semantic association between the triggering word and taste (e.g. the word blue tastes "inky"). The results suggest that synaesthesia does not simply reflect innate connections from one perceptual system to another, but that it can be mediated and/or influenced by a symbolic/conceptual level of representation.

7.
Previous studies have suggested that nonnative (L2) linguistic sounds are accommodated to native language (L1) phonemic categories. However, this conclusion may be compromised by the use of explicit discrimination tests. The present study provides an implicit measure of L2 phoneme discrimination in early bilinguals (Catalan and Spanish). Participants classified the 1st syllable of disyllabic stimuli embedded in lists where the 2nd, task-irrelevant, syllable could contain a Catalan contrastive variation (/ɛ/-/e/) or no variation. Catalan dominants responded more slowly in lists where the 2nd syllable could vary from trial to trial, suggesting an indirect effect of the /ɛ/-/e/ discrimination. Spanish dominants did not suffer this interference, performing indistinguishably from Spanish monolinguals. The present findings provide implicit evidence that even proficient bilinguals categorize L2 sounds according to their L1 representations.

8.
A core area of phonology is the study of phonotactics, or how sounds are linearly combined. Recent cross-linguistic analyses have shown that the phonology determines not only phonotactics but also the articulatory coordination or timing of adjacent sounds. In this article, I explore how the relation between coordination and phonotactics affects speakers producing nonnative sequences. Recent experimental results (Davidson 2005, 2006) have shown that English speakers often repair unattested word-initial sequences (e.g., /zg/, /vz/) by producing the consonants with a less overlapping coordination. A theoretical account of the experimental results employs Gafos's (2002) constraint-based grammar of coordination. In addition to Gafos's Alignment constraints establishing temporal relations between consonants, a family of Release constraints is proposed to encode phonotactic restrictions. The interaction of Alignment and Release constraints accounts for why speakers produce nonnative sequences by failing to adequately overlap the articulation of the consonants. The optimality theoretic analysis also incorporates floating constraints to explain why speakers are not equally accurate on all unattested clusters.

9.
Apraxia of speech and Broca's aphasia both affect voice onset time (VOT), whereas phonemic vowel length distinctions seem to be preserved. Assuming a close cooperation of anterior perisylvian language zones and the cerebellum with respect to speech timing, a similar profile of segment durations must be expected in ataxic dysarthria. In order to test this hypothesis, patients with cerebellar atrophy or cerebellar ischemia were asked to produce sentence utterances including either one of the German lexical items "Rate" (/ra:t(h)e/, 'installment'), "Ratte" (/rat(h)e/, 'rat'), "Gram" (/gra:m/, 'grief'), "Gramm" (/gram/, 'gram'), "Taten" (/t(h)atn/, 'actions'), or "Daten" (/datn/, 'data'). From the acoustic signal, the duration of the target vowels /a/ and /a:/ as well as the VOT of the word-initial alveolar stops /d/ and /t/ were determined. In addition, a master tape comprising the target words from patients and controls in randomized order was played to three listeners for perceptual evaluation. In accordance with a previous study, first, the cerebellar subjects presented with a reduced categorical separation of the VOT of voiced and unvoiced stop consonants. Second, vowel length distinctions were only compromised in case of the minimal pair "Gram"/"Gramm." In contrast to "Rate"/"Ratte", production of the former lexical items requires coordination of several orofacial structures. Disruption of vowel length contrasts would, thus, depend upon the complexity of the underlying articulatory pattern.

10.
Different kinds of speech sounds are used to signify possible word forms in every language. For example, lexical stress is used in Spanish (/‘be.be/, ‘he/she drinks’ versus /be.’be/, ‘baby’), but not in French (/‘be.be/ and /be.’be/ both mean ‘baby’). Infants learn many such native language phonetic contrasts in their first year of life, likely using a number of cues from parental speech input. One such cue could be parents’ object labeling, which can explicitly highlight relevant contrasts. Here we ask whether phonetic learning from object labeling is abstract—that is, if learning can generalize to new phonetic contexts. We investigate this issue in the prosodic domain, as the abstraction of prosodic cues (like lexical stress) has been shown to be particularly difficult. One group of 10-month-old French-learners was given consistent word labels that contrasted on lexical stress (e.g., Object A was labeled /‘ma.bu/, and Object B was labeled /ma.’bu/). Another group of 10-month-olds was given inconsistent word labels (i.e., mixed pairings), and stress discrimination in both groups was measured in a test phase with words made up of new syllables. Infants trained with consistently contrastive labels showed an earlier effect of discrimination compared to infants trained with inconsistent labels. Results indicate that phonetic learning from object labeling can indeed generalize, and suggest one way infants may learn the sound properties of their native language(s).

11.
Discrimination of speech sounds from three computer-generated continua that ranged from voiced to voiceless syllables (/ba-pa/, /da-ta/, and /ga-ka/) was tested with three macaques. The stimuli on each continuum varied in voice-onset time (VOT). Pairs of stimuli that were equally different in VOT were chosen such that they were either within-category pairs (syllables given the same phonetic label by human listeners) or between-category pairs (syllables given different phonetic labels by human listeners). Results demonstrated that discrimination performance was always best for between-category pairs of stimuli, thus replicating the “phoneme boundary effect” seen in adult listeners and in human infants as young as 1 month of age. The findings are discussed in terms of their specific impact on accounts of voicing perception in human listeners and in terms of their impact on discussions of the evolution of language.

12.
On the basis of the lexical corpus created by Amano and Kondo (2000), using the Asahi newspaper, the present study provides frequencies of occurrence for units of Japanese phonemes, morae, and syllables. Among the five vowels, /a/ (23.42%), /i/ (21.54%), /u/ (23.47%), and /o/ (20.63%) showed similar frequency rates, whereas /e/ (10.94%) was less frequent. Among the 12 consonants, /k/ (17.24%), /t/ (15.53%), and /r/ (13.11%) were used often, whereas /p/ (0.60%) and /b/ (2.43%) appeared far less frequently. Among the contracted sounds, /sj/ (36.44%) showed the highest frequency, whereas /mj/ (0.27%) rarely appeared. Among the five long vowels, /aR/ (34.4%) was used most frequently, whereas /uR/ (12.11%) was not used so often. The special sound /N/ appeared very frequently in Japanese. The syllable combination /k/+V+/N/ (19.91%) appeared most frequently among syllabic combinations with the nasal /N/. The geminate (or voiceless obstruent) /Q/, when placed before the four consonants /p/, /t/, /k/, and /s/, appeared 98.87% of the time, but the remaining 1.13% did not follow the definition. The special sounds /R/, /N/, and /Q/ seem to appear very frequently in Japanese, suggesting that they are not special in terms of frequency counts. The present study further calculated frequencies for the 33 newly and officially listed morae/syllables, which are used particularly for describing alphabetic loanwords. In addition, the top 20 bi-mora frequency combinations are reported. Files of frequency indexes may be downloaded from the Psychonomic Society Web archive at http://www.psychonomic.org/archive/.
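The tabulation described in this abstract reduces to counting unit occurrences in a corpus and normalizing to percentages. As a minimal sketch of that procedure (the romanized word list below is invented for illustration and is not the Amano and Kondo corpus), a vowel-frequency count might look like:

```python
from collections import Counter

# Hypothetical toy "corpus" in romanized transcription; the real study
# used the Amano & Kondo (2000) lexical corpus of the Asahi newspaper.
words = ["sakura", "kokoro", "inu", "umi", "te", "kawa", "yama"]

vowels = set("aiueo")

# Tally every vowel occurrence across all words.
counts = Counter(ch for w in words for ch in w if ch in vowels)
total = sum(counts.values())

# Report each vowel's share of all vowel tokens, most frequent first.
for v in sorted(counts, key=counts.get, reverse=True):
    print(f"/{v}/: {100 * counts[v] / total:.2f}%")
```

The same counting scheme extends to morae or syllables by tokenizing the corpus into those units before tallying.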

13.
In the present study, we raised the question of whether valence information of natural emotional sounds can be extracted rapidly and unintentionally. In a first experiment, we collected explicit valence ratings of brief natural sound segments. Results showed that sound segments of 400 and 600 ms duration—and with some limitation even sound segments as short as 200 ms—are evaluated reliably. In a second experiment, we introduced an auditory version of the affective Simon task to assess automatic (i.e. unintentional and fast) evaluations of sound valence. The pattern of results indicates that affective information of natural emotional sounds can be extracted rapidly (i.e. after a few hundred ms long exposure) and in an unintentional fashion.

14.
On each trial, subjects were played a dichotic pair of syllables differing in the consonant (/ba/, /da/, /ga/) or in the vowel (/ba/, /b?/, /bi/). The pair of syllables was preceded by a melody, or a sentence, and followed by the same or a different melody, or sentence. Subjects either had to retain the first piece of additional material or were free to ignore it. The different combinations of phonemic contrast, additional material, and instruction concerning the additional material were used in different sessions. In each case, the main task of the subjects was to respond to the presence or the absence of the target /ba/ on the ear previously indicated. There was no effect of context on relative ear accuracy, but the right-ear advantage observed for consonants in response latency when subjects retained a sentence gave way to a small nonsignificant left-ear advantage when subjects retained a melody. Right-ear advantage in response latencies was also observed for vowels in the verbal context, but the contextual effect, although in the same direction as for consonants, was very slight. The implications of contextual effects for a theory of the determinants of the auditory laterality effects are discussed.

15.
Tests were carried out on 17 subjects to determine the accuracy of monaural sound localization when the head is not free to turn toward the sound source. Maximum accuracy of localization for a constant-volume sound source coincided with the position for maximum perceived intensity of the sound in the front quadrant. There was a tendency for sounds to be perceived more often as coming from a position directly toward the ear. That is, for sounds in the front quadrant, errors of localization tended to be predominantly clockwise (i.e., biased toward a line directly facing the ear). Errors for sounds occurring in the rear quadrant tended to be anticlockwise. The pinna's differential effect on sound intensity between front and rear quadrants would assist in identifying the direction of movement of objects, for example an insect, passing the ear.

16.
When a listener hears a word (beef), current theories of spoken word recognition posit the activation of both lexical (beef) and sublexical (/b/, /i/, /f/) representations. No lexical representation can be settled on for an unfamiliar utterance (peef). The authors examined the perception of nonwords (peef) as a function of words or nonwords heard 10-20 min earlier. In lexical decision, nonword recognition responses were delayed if a similar word had been heard earlier. In contrast, nonword processing was facilitated by the earlier presentation of a similar nonword (baff-paff). This pattern was observed for both word-initial (beef-peef), and word-final (job-jop) deviation. With the word-in-noise task, real word primes (beef) increased real word intrusions for the target nonword (peef), but only consonant-vowel (CV) or vowel-consonant (VC) intrusions were increased with similar pseudoword primes (baff-paff). The results across tasks and experiments support both a lexical neighborhood view of activation and sublexical representations based on chunks larger than individual phonemes (CV or VC sequences).

17.
Previous research in speech perception has yielded two sets of findings which are brought together in the present study. First, it has been shown that normal hearing listeners use visible as well as acoustical information when processing speech. Second, it has been shown that there is an effect of specific language experience on speech perception such that adults often have difficulty identifying and discriminating non-native phones. The present investigation was designed to extend and combine these two sets of findings. Two studies were conducted using six consonant-vowel syllables (/ba/, /va/, /ða/, /da/, /ʒa/, and /ga/), five of which occur in French and English, and one (the interdental fricative /ða/) which occurs only in English. In Experiment 1, an effect of specific linguistic experience was evident for the auditory identification of the non-native interdental stimulus by French-speakers. In Experiment 2, it was shown that the effect of specific language experience extends to the perception of the visible information in speech. These findings are discussed in terms of their implications for our understanding of cross-language processes in speech perception and for our understanding of the development of bimodal speech perception.

18.
Listeners often categorize phonotactically illegal sequences (e.g., /dla/ in English) as phonemically similar legal ones (e.g., /gla/). In an earlier investigation of such an effect in Japanese, Dehaene-Lambertz, Dupoux, and Gout (2000) did not observe a mismatch negativity in response to deviant, illegal sequences, and therefore argued that phonotactics constrain early perceptual processing. In the present study, using a priming paradigm, we compared the event-related potentials elicited by Legal targets (e.g., /gla/) preceded by (1) phonemically distinct Control primes (e.g., /kla/), (2) different tokens of Identity primes (e.g., /gla/), and (3) phonotactically Illegal Test primes (e.g., /dla/). Targets elicited a larger positivity 200–350 ms after onset when preceded by Illegal Test primes or phonemically distinct Control primes, as compared to Identity primes. Later portions of the waveforms (350–600 ms) did not differ for targets preceded by Identity and Illegal Test primes, and the similarity ratings also did not differ in these conditions. These data support a model of speech perception in which veridical representations of phoneme sequences are not only generated during processing, but also are maintained in a manner that affects perceptual processing of subsequent speech sounds.

19.
Stimulus-response compatibility in the programming of speech
Subjects chose between sequences of one syllable (e.g., /gi/ vs. /bi/), two syllables (e.g., /gibi/ vs. /gubu/), and three syllables (e.g., /gibidi/ vs. /gubudu/), when /i/ sequences were signaled by high-pitched tones and /u/ sequences were signaled by low-pitched tones (high compatibility), or the reverse (low compatibility). Choice times were additively affected by sequence length and compatibility. A second experiment showed attenuated compatibility effects for sequences with different vowels in the first and second syllables. These results replicate previously reported results for choices between finger sequences, which suggests that the same programming methods are used in both output domains. Evidently, choices between response sequences can be achieved by selecting a distinguishing parameter and assigning it in a serial fashion to partially prepared motor subprograms.

20.
We used eyetracking, perceptual discrimination, and production tasks to examine the influences of perceptual similarity and linguistic experience on word recognition in nonnative (L2) speech. Eye movements to printed words were tracked while German and Dutch learners of English heard words containing one of three pronunciation variants (/t/, /s/, or /f/) of the interdental fricative /θ/. Irrespective of whether the speaker was Dutch or German, looking preferences for target words with /θ/ matched the preferences for producing /s/ variants in German speakers and /t/ variants in Dutch speakers (as determined via the production task), while a control group of English participants showed no such preferences. The perceptually most similar and most confusable /f/ variant (as determined via the discrimination task) was never preferred as a match for /θ/. These results suggest that linguistic experience with L2 pronunciations facilitates recognition of variants in an L2, with effects of frequency outweighing effects of perceptual similarity.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号