Similar literature
20 similar documents retrieved.
1.
Ramus F, Nespor M, Mehler J. Cognition 2000, 75(1): AD3-AD30
Spoken languages have been classified by linguists according to their rhythmic properties, and psycholinguists have relied on this classification to account for infants' capacity to discriminate languages. Although researchers have measured many speech signal properties, they have failed to identify reliable acoustic characteristics for language classes. This paper presents instrumental measurements based on a consonant/vowel segmentation for eight languages. The measurements suggest that intuitive rhythm types reflect specific phonological properties, which in turn are signaled by the acoustic/phonetic properties of speech. The data support the notion of rhythm classes and also allow the simulation of infant language discrimination, consistent with the hypothesis that newborns rely on a coarse segmentation of speech. A hypothesis is proposed regarding the role of rhythm perception in language acquisition.
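As a rough illustration of the kind of duration-based measurement described above, the Python sketch below computes %V (the proportion of the utterance occupied by vocalic intervals) and ΔC/ΔV (the standard deviations of consonantal and vocalic interval durations) from a consonant/vowel segmentation. The interval labels and durations are invented placeholders, not data from the paper.

    # Minimal sketch: duration statistics over a consonant/vowel segmentation.
    # The interval durations below are invented for illustration only.
    from statistics import pstdev

    # Each interval is (label, duration in seconds); 'V' = vocalic, 'C' = consonantal.
    intervals = [("C", 0.08), ("V", 0.12), ("C", 0.15), ("V", 0.10),
                 ("C", 0.07), ("V", 0.14), ("C", 0.11), ("V", 0.09)]

    vowel_durs = [d for label, d in intervals if label == "V"]
    cons_durs = [d for label, d in intervals if label == "C"]

    total = sum(vowel_durs) + sum(cons_durs)
    percent_v = 100.0 * sum(vowel_durs) / total   # proportion of utterance that is vocalic
    delta_c = pstdev(cons_durs)                   # variability of consonantal intervals
    delta_v = pstdev(vowel_durs)                  # variability of vocalic intervals

    print(f"%V = {percent_v:.1f}, deltaC = {delta_c:.3f}, deltaV = {delta_v:.3f}")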

2.
Speech comprehension is the psychological process by which a listener receives external speech input and derives meaning from it. In everyday communication, auditory speech comprehension is influenced by rhythmic information at multiple timescales; three common external rhythms are prosodic-structure rhythm, contextual rhythm, and the rhythm of the speaker's body language. These rhythms modulate processes such as phoneme discrimination, word perception, and speech intelligibility during comprehension. Internal rhythm takes the form of neural oscillations in the brain, which can represent the hierarchical features of external speech input at different timescales. Neural entrainment between external rhythmic stimulation and internal neural activity can optimize the brain's processing of speech stimuli, and it is modulated by the listener's top-down cognitive processes, further strengthening the internal representation of the target speech. We propose that entrainment may be the key mechanism that links external and internal rhythms and allows them jointly to shape speech comprehension. Elucidating external and internal rhythms and the mechanism connecting them offers a window onto understanding speech, a complex sequence structured over multiple hierarchical timescales.
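Entrainment between an external rhythm and ongoing neural activity is often quantified with a phase-locking measure. The sketch below is a minimal, assumption-laden illustration of one such measure (the phase-locking value) applied to simulated signals; the sampling rate, frequencies, and noise level are arbitrary and are not taken from any study cited in this review.

    # Sketch: phase-locking value (PLV) between a simulated speech-envelope rhythm
    # and a simulated "neural" signal. All parameters are invented for illustration.
    import numpy as np
    from scipy.signal import hilbert

    fs = 200                       # sampling rate (Hz)
    t = np.arange(0, 10, 1 / fs)   # 10 s of data
    envelope = np.sin(2 * np.pi * 4.5 * t)                          # ~syllable-rate rhythm
    neural = np.sin(2 * np.pi * 4.5 * t + 0.6) + 0.5 * np.random.randn(t.size)

    phase_env = np.angle(hilbert(envelope))
    phase_neu = np.angle(hilbert(neural))
    plv = np.abs(np.mean(np.exp(1j * (phase_env - phase_neu))))     # 1 = perfect locking

    print(f"PLV = {plv:.2f}")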

3.
The ability to appropriately reciprocate or compensate a partner's communicative response represents an essential element of communicative competence. Previous research indicates that as children grow older, their speech levels reflect greater adaptation relative to their partner's speech. In this study, we argue that patterns of adaptation are related to specific linguistic and pragmatic abilities, such as verbal responsiveness, involvement in the interaction, and the production of relatively complex syntactic structures. Thirty-seven children (3–6 years of age) individually interacted with an adult for 20 to 30 minutes. Adaptation between child and adult was examined among conversational floortime, response latency, and speech rate. Three conclusions were drawn from the results of this investigation. First, by applying time-series analysis to the interactants' speech behaviors within each dyad, individual measures of the child's adaptations to the adult's speech can be generated. Second, consistent with findings in the adult domain, these children generally reciprocated changes in the adult's speech rate and response latency. Third, there were differences in degree and type of adaptation within specific dyads. Chronological age was not useful in accounting for this individual variation, but specific linguistic and social abilities were. Implications of these findings for the development of communicative competence and for the study of normal versus language-delayed speech were discussed.
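One simple way to quantify the kind of adaptation discussed above is a lagged correlation between the two interactants' speech behaviors across turns. The sketch below is only illustrative: the turn-by-turn speech rates are invented, and the analysis is a simplified stand-in for the time-series methods actually used in the study.

    # Sketch: how closely one speaker's speech rate follows the other's,
    # using a simple lagged correlation. All numbers are invented examples.
    import numpy as np

    adult_rate = np.array([3.1, 3.4, 3.0, 2.8, 3.3, 3.6, 3.2, 2.9])  # syllables/sec per turn
    child_rate = np.array([2.5, 2.9, 3.0, 2.6, 2.7, 3.1, 3.3, 2.8])

    def lagged_corr(x, y, lag=1):
        """Correlation between x at turn t and y at turn t + lag."""
        if lag > 0:
            x, y = x[:-lag], y[lag:]
        return np.corrcoef(x, y)[0, 1]

    # Positive values suggest the child reciprocates (moves toward) the adult's rate
    # one turn later; negative values would suggest compensation.
    print("child follows adult:", round(lagged_corr(adult_rate, child_rate, lag=1), 2))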

4.
The use of rhythm in attending to speech
Three experiments examined attentional allocation during speech processing to determine whether listeners capitalize on the rhythmic nature of speech and attend more closely to stressed than to unstressed syllables. Ss performed a phoneme monitoring task in which the target phoneme occurred on a syllable that was either predicted to be stressed or unstressed by the context preceding the target word. Stimuli were digitally edited to eliminate the local acoustic correlates of stress. A sentential context and a context composed of word lists, in which all the words had the same stress pattern, were used. In both cases, the results suggest that attention may be preferentially allocated to stressed syllables during speech processing. However, a normal sentence context may not provide strong predictive cues to lexical stress, limiting the use of the attentional focus.

5.
The role of rhythm in the speech intelligibility of 18 hearing-impaired children, aged 15 years with hearing losses from 40 to 108 dB, was investigated. Their perceptual judgement of visual rhythm sequences was superior to that of the hearing controls, but their scores were not correlated with their speech intelligibility.

6.
Across all languages studied to date, audiovisual speech exhibits a consistent rhythmic structure. This rhythm is critical to speech perception. Some have suggested that the speech rhythm evolved de novo in humans. An alternative account--the one we explored here--is that the rhythm of speech evolved through the modification of rhythmic facial expressions. We tested this idea by investigating the structure and development of macaque monkey lipsmacks and found that their developmental trajectory is strikingly similar to the one that leads from human infant babbling to adult speech. Specifically, we show that: (1) younger monkeys produce slower, more variable mouth movements and as they get older, these movements become faster and less variable; and (2) this developmental pattern does not occur for another cyclical mouth movement--chewing. These patterns parallel human developmental patterns for speech and chewing. They suggest that, in both species, the two types of rhythmic mouth movements use different underlying neural circuits that develop in different ways. Ultimately, both lipsmacking and speech converge on a ~5 Hz rhythm that represents the frequency that characterizes the speech rhythm of human adults. We conclude that monkey lipsmacking and human speech share a homologous developmental mechanism, lending strong empirical support to the idea that the human speech rhythm evolved from the rhythmic facial expressions of our primate ancestors.

7.
8.
In this study, we examined the viability of measuring personality using computerized lexical analysis of natural speech. Two well-validated models of personality were measured, one involving trait positive affectivity (PA) and negative affectivity (NA) dimensions and the other involving a separate behavioral inhibition motivational system (BIS) and a behavioral activation motivational system (BAS). Individuals with high levels of trait PA and sensitive BAS expressed high levels of positive emotion in their natural speech, whereas individuals with high levels of trait NA and sensitive BIS tended to express high levels of negative emotion. The personality variables accounted for almost a quarter of the variance in emotional expressivity.
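Computerized lexical analysis of this kind typically counts dictionary-defined emotion words and expresses them as a proportion of all words spoken. The sketch below shows the general idea with tiny placeholder word lists; it is not the dictionary or software used in the study.

    # Sketch of dictionary-based lexical analysis of a speech transcript.
    # The word lists are tiny placeholders, not the dictionaries used in the study.
    import re

    POSITIVE = {"happy", "great", "love", "excited", "good"}
    NEGATIVE = {"sad", "angry", "worried", "bad", "afraid"}

    def emotion_rates(transcript: str):
        words = re.findall(r"[a-z']+", transcript.lower())
        pos = sum(w in POSITIVE for w in words)
        neg = sum(w in NEGATIVE for w in words)
        n = max(len(words), 1)
        # Express counts as a percentage of all words, as lexical analysis tools typically do.
        return 100 * pos / n, 100 * neg / n

    pos_rate, neg_rate = emotion_rates("I was so happy and excited, though a bit worried at first.")
    print(f"positive: {pos_rate:.1f}%  negative: {neg_rate:.1f}%")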

9.
10.
Tilsen S. Cognitive Science 2009, 33(5): 839-879
Temporal patterns in human movement, and in speech in particular, occur on multiple timescales. Regularities in such patterns have been observed between speech gestures, which are relatively quick movements of articulators (e.g., tongue fronting and lip protrusion), and also between rhythmic units (e.g., syllables and metrical feet), which occur more slowly. Previous work has shown that patterns in both domains can be usefully modeled with oscillatory dynamical systems. To investigate how rhythmic and gestural domains interact, an experiment was conducted in which speakers performed a phrase repetition task, and gestural kinematics were recorded using electromagnetic articulometry. Variance in relative timing of gestural movements was correlated with variance in rhythmic timing, indicating that gestural and rhythmic systems interact in the process of planning and producing speech. A model of rhythmic and gestural planning oscillators with multifrequency coupling is presented, which can simulate the observed covariability between rhythmic and gestural timing.
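The following sketch simulates two phase oscillators, a slower rhythmic one and a faster gestural one, coupled at a 1:2 frequency ratio, to give a concrete sense of what multifrequency coupling between planning oscillators can look like. The frequencies, coupling strength, and coupling function are illustrative assumptions, not the parameters fitted in the paper.

    # Sketch: two planning oscillators (a slower "rhythmic" one and a faster
    # "gestural" one) with 1:2 multifrequency coupling. Parameters are illustrative.
    import math

    dt = 0.001                    # integration step (s)
    f_rhythm, f_gest = 2.5, 5.0   # intrinsic frequencies (Hz), assumed values
    k = 1.5                       # coupling strength, assumed

    theta_r, theta_g = 0.0, 0.8   # initial phases (rad)
    for _ in range(int(5.0 / dt)):        # simulate 5 seconds
        # n:m coupling (here 1:2): each oscillator is nudged toward phase-locking.
        dr = 2 * math.pi * f_rhythm + k * math.sin(theta_g - 2 * theta_r)
        dg = 2 * math.pi * f_gest   + k * math.sin(2 * theta_r - theta_g)
        theta_r += dr * dt
        theta_g += dg * dt

    # The relative phase settles near a stable value if the oscillators entrain.
    rel_phase = (theta_g - 2 * theta_r) % (2 * math.pi)
    print(f"relative phase (rad): {rel_phase:.2f}")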

11.
Some languages create the impression of being stress timed. Claims have been made that this timing of stressed syllables enables the listener to predict the future locations of informative parts later in a sentence. The fact that phoneme monitoring is delayed when targets in a spoken sentence are displaced has been taken as supporting this claim (Meltzer, Martin, Bergfeld Mills, Imhoff and Zohar, 1976). In the present study temporal displacement was induced without introducing phonetic discontinuities. In Dutch sentences a word just in advance of a target-bearing word was replaced by another one differing in length. Results show that the temporal displacement per se did not have any effect on phoneme-monitoring reaction times. Implications for a theory of speech processing are discussed.

12.
Performance on selective-attention and divided-attention tasks shows strong and consistent interactions when participants rapidly classify auditory stimuli whose linguistic and perceptual dimensions (the words low vs. high, low and high pitch, low and high position in space) share common labels. Compared with baseline performance, response times were greater when one or two irrelevant dimensions varied (Garner interference) and when combinations of attributes were incongruent rather than congruent (congruence effects). Performance depended only on the congruence relationships between the relevant dimension and each of the irrelevant dimensions and not on the congruence relationships between the irrelevant dimensions themselves. In selective attention, an additive multidimensional model accounts well for the patterns of both Garner interference and congruence effects.

13.
14.

Three experiments investigated listeners’ ability to use speech rhythm to attend selectively to a single target talker presented in multi-talker babble (Experiments 1 and 2) and in speech-shaped noise (Experiment 3). Participants listened to spoken sentences of the form “Ready [Call sign] go to [Color] [Number] now” and reported the Color and Number spoken by a target talker (cued by the Call sign “Baron”). Experiment 1 altered the natural rhythm of the target talker and background talkers for two-talker and six-talker backgrounds. Experiment 2 considered parametric rhythm alterations over a wider range, altering the rhythm of either the target or the background talkers. Experiments 1 and 2 revealed that altering the rhythm of the target talker, while keeping the rhythm of the background intact, reduced listeners’ ability to report the Color and Number spoken by the target talker. Conversely, altering the rhythm of the background talkers, while keeping the target rhythm intact, improved listeners’ ability to report the Color and Number spoken by the target talker. Experiment 3, which embedded the target talker in speech-shaped noise rather than multi-talker babble, similarly reduced recognition of the target sentence with increased alteration of the target rhythm. This pattern of results favors a dynamic-attending theory-based selective-entrainment hypothesis over a disparity-based segregation hypothesis and an increased salience hypothesis.


15.
Vocal babbling involves production of rhythmic sequences of a mouth close–open alternation giving the perceptual impression of a sequence of consonant–vowel syllables. Petitto and co-workers have argued vocal babbling rhythm is the same as manual syllabic babbling rhythm, in that it has a frequency of 1 cycle per second. They also assert that adult speech and sign language display the same frequency. However, available evidence suggests that the vocal babbling frequency approximates 3 cycles per second. Both adult spoken language and sign language show higher frequencies than babbling in their respective modalities. No information is currently available on the basic rhythmic parameter of intercyclical variability in either modality. A study of reduplicative babbling by 4 infants and 4 adults producing reduplicated syllables confirms the 3 per second vocal babbling rate, as well as a faster rate in adults, and provides new information on intercyclical variability.
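Given a series of mouth open-close cycle durations, the babbling rate and its intercyclical variability can be summarized as cycles per second and a coefficient of variation, as in the sketch below; the durations shown are invented for illustration.

    # Sketch: rate and intercyclical variability from a series of mouth-cycle durations.
    # Durations (in seconds) are invented for illustration.
    from statistics import mean, pstdev

    cycle_durations = [0.36, 0.30, 0.41, 0.33, 0.29, 0.38]   # one reduplicated sequence

    rate_hz = 1.0 / mean(cycle_durations)                     # cycles per second
    cv = pstdev(cycle_durations) / mean(cycle_durations)      # intercyclical variability

    print(f"rate ~= {rate_hz:.1f} cycles/s, coefficient of variation = {cv:.2f}")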

16.
Sussman HM, Fruchter D, Hilbert J, Sirosh J. The Behavioral and Brain Sciences 1998, 21(2): 241-259; discussion 260-299
Neuroethological investigations of mammalian and avian auditory systems have documented species-specific specializations for processing complex acoustic signals that could, if viewed in abstract terms, have an intriguing and striking relevance for human speech sound categorization and representation. Each species forms biologically relevant categories based on combinatorial analysis of information-bearing parameters within the complex input signal. This target article uses known neural models from the mustached bat and barn owl to develop, by analogy, a conceptualization of human processing of consonant plus vowel sequences that offers a partial solution to the noninvariance dilemma--the nontransparent relationship between the acoustic waveform and the phonetic segment. Critical input sound parameters used to establish species-specific categories in the mustached bat and barn owl exhibit high correlation and linearity due to physical laws. A cue long known to be relevant to the perception of stop place of articulation is the second formant (F2) transition. This article describes an empirical phenomenon--the locus equations--that describes the relationship between the F2 of a vowel and the F2 measured at the onset of a consonant-vowel (CV) transition. These variables, F2 onset and F2 vowel within a given place category, are consistently and robustly linearly correlated across diverse speakers and languages, and even under perturbation conditions as imposed by bite blocks. A functional role for this category-level extreme correlation and linearity (the "orderly output constraint") is hypothesized based on the notion of an evolutionarily conserved auditory-processing strategy. High correlation and linearity between critical parameters in the speech signal that help to cue place of articulation categories might have evolved to satisfy a preadaptation by mammalian auditory systems for representing tightly correlated, linearly related components of acoustic signals.
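A locus equation is simply a linear regression of F2 measured at the onset of the CV transition on F2 measured in the vowel, fitted within one place-of-articulation category. The sketch below fits such a regression to invented formant values; the numbers are placeholders, not measurements from the article.

    # Sketch: fit a locus equation, F2_onset = slope * F2_vowel + intercept,
    # for tokens of a single stop place category. Formant values (Hz) are invented.
    import numpy as np

    f2_vowel = np.array([2300, 1900, 1500, 1200, 1000, 800])    # F2 at vowel midpoint
    f2_onset = np.array([2100, 1850, 1550, 1350, 1250, 1100])   # F2 at CV transition onset

    slope, intercept = np.polyfit(f2_vowel, f2_onset, 1)
    r = np.corrcoef(f2_vowel, f2_onset)[0, 1]

    print(f"F2_onset ~= {slope:.2f} * F2_vowel + {intercept:.0f} Hz  (r = {r:.3f})")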

17.
In the present experiment, the authors tested Mandarin and English listeners on a range of auditory tasks to investigate whether long-term linguistic experience influences the cognitive processing of nonspeech sounds. As expected, Mandarin listeners identified Mandarin tones significantly more accurately than English listeners; however, performance did not differ across the listener groups on a pitch discrimination task requiring fine-grained discrimination of simple nonspeech sounds. The crucial finding was that cross-language differences emerged on a nonspeech pitch contour identification task: The Mandarin listeners more often misidentified flat and falling pitch contours than the English listeners in a manner that could be related to specific features of the sound structure of Mandarin, which suggests that the effect of linguistic experience extends to nonspeech processing under certain stimulus and task conditions.

18.
19.
The “McGurk effect” demonstrates that visual (lip-read) information is used during speech perception even when it is discrepant with auditory information. While this has been established as a robust effect in subjects from Western cultures, our own earlier results had suggested that Japanese subjects use visual information much less than American subjects do (Sekiyama & Tohkura, 1993). The present study examined whether Chinese subjects would also show a reduced McGurk effect due to their cultural similarities with the Japanese. The subjects were 14 native speakers of Chinese living in Japan. Stimuli consisted of 10 syllables (/ba/, /pa/, /ma/, /wa/, /da/, /ta/, /na/, /ga/, /ka/, /ra/) pronounced by two speakers, one Japanese and one American. Each auditory syllable was dubbed onto every visual syllable within one speaker, resulting in 100 audiovisual stimuli in each language. The subjects’ main task was to report what they thought they had heard while looking at and listening to the speaker while the stimuli were being uttered. Compared with previous results obtained with American subjects, the Chinese subjects showed a weaker McGurk effect. The results also showed that the magnitude of the McGurk effect depends on the length of time the Chinese subjects had lived in Japan. Factors that foster and alter the Chinese subjects’ reliance on auditory information are discussed.

20.
The development of reflex theory in its Pavlovian interpretation had significant resonance in a wide range of academic research areas. Its impact on the so-called humanities was, perhaps, no less than the effect it had in medical science. The idea of the conditioned reflex suggesting a physiological explanation of behaviour patterns received a particularly warm welcome in philosophy and psychology as it provided a scientifically-based tool for a conceptual u-turn towards objectivism. This article looks into the ways these ideas contributed to the formation of the Soviet language theory, namely, to the sociological interpretation of language development and speech production presented in the pioneering works of Sergej M. Dobrogaev (1873–1952).
Katya Chown

