Similar documents
Found 20 similar documents (search time: 15 ms)
1.
In the past, the nature of the compositional units proposed for spoken language has largely diverged from the types of control units pursued in the domains of other skilled motor tasks. A classic source of evidence as to the units structuring speech has been patterns observed in speech errors--"slips of the tongue". The present study reports, for the first time, on kinematic data from tongue and lip movements during speech errors elicited in the laboratory using a repetition task. Our data are consistent with the hypothesis that speech production results from the assembly of dynamically defined action units--gestures--in a linguistically structured environment. The experimental results support both the presence of gestural units and the dynamical properties of these units and their coordination. This study of speech articulation shows that it is possible to develop a principled account of spoken language within a more general theory of action.

2.
Empirical investigations of gestural aspects of timed rhythmic movements indicate that asymmetric movement trajectories are a common characteristic of performances of repetitive rhythmic patterns. The behavioural or neural origin of these asymmetric trajectories has, however, not been identified. In the present study we outline a theoretical model capable of synthesizing the asymmetric movement trajectories documented in empirical investigations by Balasubramaniam et al. (2004). Characteristic qualities of the extension/flexion profiles in the observed asymmetric trajectories are reproduced, and we conduct an experiment similar to Balasubramaniam et al. (2004) to show that the empirically documented movement trajectories and our modelled approximations share the same spectral components. The model is based on an application of frequency-modulated movements, and the theoretical interpretation it offers is to view paced rhythmic movements as an unpaced movement being "stretched" and "compressed" by the presence of a metronome. We discuss our model construction within the framework of event-based and emergent timing, and argue that a change between these timing modes might be reflected in the strength of the modulation in our model.
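The abstract above describes paced movement as an unpaced oscillation "stretched" and "compressed" by frequency modulation. A minimal sketch of that idea follows; the function name, rates, and modulation depth are illustrative assumptions, not the authors' parameters:

```python
import math

def fm_trajectory(t, carrier_hz=2.0, mod_hz=2.0, mod_depth=0.6, amp=1.0):
    """Position at time t (s) of a frequency-modulated oscillation.

    The phase advances at the carrier rate but is locally stretched and
    compressed by a sinusoidal modulation term, so the resulting
    extension/flexion profile is asymmetric in time.
    """
    phase = (2 * math.pi * carrier_hz * t
             + mod_depth * math.sin(2 * math.pi * mod_hz * t))
    return amp * math.sin(phase)

# One movement cycle sampled at 1 kHz; with mod_depth > 0 the time spent
# rising differs from the time spent falling.
samples = [fm_trajectory(i / 1000.0) for i in range(500)]
```

With `mod_depth = 0` the trajectory reduces to a symmetric sinusoid, which is one way to read the claim that modulation strength indexes the timing mode.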

3.
We investigated the extent to which people can discriminate between languages on the basis of their characteristic temporal, rhythmic information, and the extent to which this ability generalizes across sensory modalities. We used rhythmical patterns derived from the alternation of vowels and consonants in English and Japanese, presented in audition, vision, both audition and vision at the same time, or touch. Experiment 1 confirmed that discrimination is possible on the basis of auditory rhythmic patterns, and extended it to the case of vision, using ‘aperture-close’ mouth movements of a schematic face. In Experiment 2, language discrimination was demonstrated using visual and auditory materials that did not resemble spoken articulation. In a combined analysis including data from Experiments 1 and 2, a beneficial effect was also found when the auditory rhythmic information was available to participants. Despite the fact that discrimination could be achieved using vision alone, auditory performance was nevertheless better. In a final experiment, we demonstrate that the rhythm of speech can also be discriminated successfully by means of vibrotactile patterns delivered to the fingertip. The results of the present study therefore demonstrate that discrimination between languages' syllabic rhythmic patterns is possible on the basis of visual and tactile displays.

4.
Erin E. Hannon, Cognition, 2009, 111(3), 403-409
Recent evidence suggests that the musical rhythm of a particular culture may parallel the speech rhythm of that culture’s language (Patel, A. D., & Daniele, J. R. (2003). An empirical comparison of rhythm in language and music. Cognition, 87, B35-B45). The present experiments aimed to determine whether listeners actually perceive such rhythmic differences in a purely musical context (i.e., in instrumental music without words). In Experiment 1a, listeners successfully classified instrumental renditions of French and English songs having highly contrastive rhythmic differences. Experiment 1b replicated this result with the same songs containing rhythmic information only. In Experiments 2a and 2b, listeners successfully classified original and rhythm-only stimuli when language-specific rhythmic differences were less contrastive but more representative of differences found in actual music and speech. These findings indicate that listeners can use rhythmic similarities and differences to classify songs originally composed in two languages having contrasting rhythmic prosody.

5.
In accord with a proposed innate link between speech perception and production (e.g., motor theory), this study provides compelling evidence for the inhibition of stuttering events in people who stutter prior to the initiation of the intended speech act, via both the perception and the production of speech gestures. Stuttering frequency during reading was reduced in 10 adults who stutter by approximately 40% in three of four experimental conditions: (1) following passive audiovisual presentation (i.e., viewing and hearing) of another person producing pseudostuttering (stutter-like syllabic repetitions) and following active shadowing of both (2) pseudostuttered and (3) fluent speech. Stuttering was not inhibited during reading following passive audiovisual presentation of fluent speech. Syllabic repetitions can inhibit stuttering both when produced and when perceived, and we suggest that these elementary stuttering forms may serve as compensatory speech gestures for releasing involuntary stuttering blocks by engaging mirror neuronal systems that are predisposed for fluent gestural imitation.

6.
Simko, J., & Cummins, F., Psychological Review, 2010, 117(4), 1229-1246
Movement science faces the challenge of reconciling parallel sequences of discrete behavioral goals with observed fluid, context-sensitive motion. This challenge arises with a vengeance in the speech domain, in which gestural primitives play the role of discrete goals. The task dynamic framework has proved effective in modeling the manner in which the gestural primitives of articulatory phonology can result in smooth, biologically plausible, movement of model articulators. We present a variant of the task dynamic model with 1 significant innovation: Tasks are not abstract and context free but are embodied and tied to specific effectors. An advantage of this approach is that it allows the definition of a parametric cost function that can be optimized. Optimization generates gestural scores in which the relative timing of gestures is fully specified. We demonstrate that movements generated in an optimal manner are phonetically plausible. Highly nuanced movement trajectories are emergent based on relatively simple optimality criteria. This addresses a long-standing need within this theoretical framework and provides a rich modeling foundation for subsequent work.
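In the task dynamic framework referenced above, each task variable is driven toward its gesture target by point-attractor dynamics. A minimal sketch, assuming a critically damped second-order system; the stiffness value and function name are illustrative, not the model's embodied parameterization:

```python
import math

def simulate_gesture(target, x0=0.0, v0=0.0, stiffness=100.0, dt=0.001, steps=1000):
    """Euler-integrate a critically damped point attractor
        x'' = -k (x - target) - d x'   with d = 2 * sqrt(k),
    returning the trajectory of the task variable x."""
    damping = 2.0 * math.sqrt(stiffness)
    x, v = x0, v0
    traj = [x]
    for _ in range(steps):
        a = -stiffness * (x - target) - damping * v
        v += a * dt  # semi-implicit Euler: update velocity first
        x += v * dt
        traj.append(x)
    return traj
```

Critical damping gives a smooth, overshoot-free approach to the target, which is part of what makes such systems attractive for modeling articulator movement.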

7.
Mitterer, H., & Ernestus, M., Cognition, 2008, 109(1), 168-173
This study reports a shadowing experiment, in which one has to repeat a speech stimulus as fast as possible. We tested claims about a direct link between perception and production based on speech gestures, and obtained two types of counterevidence. First, shadowing is not slowed down by a gestural mismatch between stimulus and response. Second, phonetic detail is more likely to be imitated in a shadowing task if it is phonologically relevant. This is consistent with the idea that speech perception and speech production are only loosely coupled, on an abstract phonological level.

8.
Across all languages studied to date, audiovisual speech exhibits a consistent rhythmic structure. This rhythm is critical to speech perception. Some have suggested that the speech rhythm evolved de novo in humans. An alternative account--the one we explored here--is that the rhythm of speech evolved through the modification of rhythmic facial expressions. We tested this idea by investigating the structure and development of macaque monkey lipsmacks and found that their developmental trajectory is strikingly similar to the one that leads from human infant babbling to adult speech. Specifically, we show that: (1) younger monkeys produce slower, more variable mouth movements and as they get older, these movements become faster and less variable; and (2) this developmental pattern does not occur for another cyclical mouth movement--chewing. These patterns parallel human developmental patterns for speech and chewing. They suggest that, in both species, the two types of rhythmic mouth movements use different underlying neural circuits that develop in different ways. Ultimately, both lipsmacking and speech converge on a ~5 Hz rhythm, the frequency that characterizes the speech rhythm of human adults. We conclude that monkey lipsmacking and human speech share a homologous developmental mechanism, lending strong empirical support to the idea that the human speech rhythm evolved from the rhythmic facial expressions of our primate ancestors.

9.
People have remarkable difficulty generating two responses that must follow different temporal sequences, unless the temporal patterns are simply related (e.g., periods in 2:1, 3:1 relation). For example, it is hard to tap to two conflicting rhythms presented concurrently (i.e., a polyrhythm) using the right and left hands (Klapp, 1979), or to tap while articulating a conflicting speech utterance (Klapp, 1981). The present experiments indicate that difficulties in processing conflicting rhythms occur even when people must (a) merely monitor the stimuli and indicate the termination of one rhythmic sequence or (b) tap with a single hand. Responding to polyrhythms is thus difficult even without multiple limb coordination. Furthermore, the difficulty of two-handed tapping to polyrhythms that involve two different tones was found to decrease as the pitch difference between the tones was decreased. This result indicates that the difficulty of rhythmic coordination can be perceptually manipulated in a striking fashion. Polyrhythmic performance thus provides an excellent opportunity for examining possible interactions of perceptual and motor organizations.
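The simply related periods mentioned above (2:1, 3:1) differ from true polyrhythms in how often the two streams' onsets coincide. A small illustration; the function names and cycle length are ours, not from the study:

```python
from fractions import Fraction

def polyrhythm_onsets(n_a, n_b, cycle_ms=1200):
    """Onset times (ms, exact fractions) of two isochronous streams that
    fit n_a and n_b beats into one shared cycle, e.g. n_a=3, n_b=2 for a
    3:2 polyrhythm."""
    stream_a = [Fraction(cycle_ms * i, n_a) for i in range(n_a)]
    stream_b = [Fraction(cycle_ms * i, n_b) for i in range(n_b)]
    return stream_a, stream_b

def shared_onsets(n_a, n_b, cycle_ms=1200):
    """Onsets at which the two streams coincide within one cycle."""
    a, b = polyrhythm_onsets(n_a, n_b, cycle_ms)
    return sorted(set(a) & set(b))
```

For a 2:1 relation every onset of the slow stream is nested inside the fast one, whereas in a 3:2 polyrhythm only the downbeat is shared, which is one way to see why the latter resists integrated timing.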

10.
Gestural beats are typically small up and down or back and forth flicks of one or both hands. It has been assumed by researchers hypothesizing about the interaction of beats and speech that beats coincide with verbal stress. Microanalysis shows that gestural beats are organized in rhythmic patterns and do not necessarily cooccur with stressed syllables as previously assumed. Tone group nuclei appear to function as gestational points for rhythmic groups, which supports the theory that thinking utilizes words as cognitive tools and provides evidence that in some cases entire tone units are formed in advance. Evidence of an interpersonal gestural rhythm is also presented. This research was supported by grants from the National Science Foundation and from the American Association of University Women Educational Foundation.

11.
Purpose: Adults who stutter speak more fluently during choral speech contexts than they do during solo speech contexts. The underlying mechanisms for this effect remain unclear, however. In this study, we examined the extent to which the choral speech effect depended on presentation of intact temporal speech cues. We also examined whether speakers who stutter followed choral signals more closely than typical speakers did.
Method: 8 adults who stuttered and 8 adults who did not stutter read 60 sentences aloud during a solo speaking condition and three choral speaking conditions (240 total sentences), two of which featured either temporally altered or indeterminate word duration patterns. Effects of these manipulations on speech fluency, rate, and temporal entrainment with the choral speech signal were assessed.
Results: Adults who stutter spoke more fluently in all choral speaking conditions than they did when speaking solo. They also spoke more slowly and exhibited closer temporal entrainment with the choral signal during the mid- to late stages of sentence production than the adults who did not stutter. Both groups entrained more closely with unaltered choral signals than they did with altered choral signals.
Conclusions: Findings suggest that adults who stutter make greater use of speech-related information in choral signals when talking than adults with typical fluency do. The presence of fluency facilitation during temporally altered choral speech and conversation babble, however, suggests that temporal/gestural cueing alone cannot account for fluency facilitation in speakers who stutter. Other potential fluency-enhancing mechanisms are discussed.
Educational Objectives: The reader will be able to (a) summarize competing views on stuttering as a speech timing disorder, (b) describe the extent to which adults who stutter depend on an accurate rendering of temporal information in order to benefit from choral speech, and (c) discuss possible explanations for fluency facilitation in the presence of inaccurate or indeterminate temporal cues.

12.
Intentional and attentional dynamics of speech-hand coordination
Interest is rapidly growing in the hypothesis that natural language emerged from a more primitive set of linguistic acts based primarily on manual activity and hand gestures. Increasingly, researchers are investigating how hemispheric asymmetries are related to attentional and manual asymmetries (i.e., handedness). Both speech perception and production have origins in the dynamical generative movements of the vocal tract known as articulatory gestures. The notion of a "gesture" can thus be extended to both hand movements and speech articulation, and the generative actions of the hands and vocal tract can provide a basis for the (direct) perception of linguistic acts. Such gestures are best described with the methods of dynamical systems analysis, since perception and production can then be characterized in the same commensurate language. Experiments were conducted using a phase transition paradigm to examine the coordination of speech-hand gestures in both left- and right-handed individuals. Results address coordination (in-phase vs. anti-phase), hand (left vs. right), lateralization (left vs. right hemisphere), focus of attention (speech vs. tapping), and how dynamical constraints provide a foundation for human communicative acts. Predictions from the asymmetric HKB equation confirm the attentional basis of functional asymmetry. Of significance is a new understanding of the role of perceived synchrony (p-centres) during intentional cases of gestural coordination.
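The asymmetric HKB equation mentioned in the abstract governs the relative phase phi between two coordinated effectors, with a detuning term capturing the asymmetry. A sketch with simple Euler integration and illustrative parameter values (not the study's fitted parameters):

```python
import math

def relative_phase(phi0, a=1.0, b=1.0, detuning=0.0, dt=0.01, steps=5000):
    """Euler-integrate the (asymmetric) HKB relative-phase equation
        dphi/dt = detuning - a sin(phi) - 2 b sin(2 phi)
    and return the final relative phase."""
    phi = phi0
    for _ in range(steps):
        phi += dt * (detuning - a * math.sin(phi) - 2 * b * math.sin(2 * phi))
    return phi
```

Anti-phase coordination (phi = pi) is stable only while b/a > 1/4; below that ratio a perturbed anti-phase pattern falls into in-phase coordination (phi = 0), which is the phase transition the paradigm exploits.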

13.
Brain and Cognition, 2013, 81(3), 329-336
Humans perceive a wide range of temporal patterns, including those rhythms that occur in music, speech, and movement; however, there are constraints on the rhythmic patterns that we can represent. Past research has shown that sequences in which sounds occur regularly at non-metrical locations in a repeating beat period (non-integer ratio subdivisions of the beat, e.g. sounds at 430 ms in a 1000 ms beat) are represented less accurately than sequences with metrical relationships, where events occur at even subdivisions of the beat (integer ratios, e.g. sounds at 500 ms in a 1000 ms beat). Why do non-integer ratio rhythms present cognitive challenges? An emerging theory is that non-integer ratio sequences are represented incorrectly, “regularized” in the direction of the nearest metrical pattern, and the present study sought evidence of such perceptual regularization toward integer ratio relationships. Participants listened to metrical and non-metrical rhythmic auditory sequences during electroencephalogram recording, and sounds were pseudorandomly omitted from the stimulus sequence. Cortical responses to these omissions (omission elicited potentials; OEPs) were used to estimate the timing of expectations for omitted sounds in integer ratio and non-integer ratio locations. OEP amplitude and onset latency measures indicated that expectations for non-integer ratio sequences are distorted toward the nearest metrical location in the rhythmic period. These top-down effects demonstrate metrical regularization in a purely perceptual context, and provide support for dynamical accounts of rhythm perception.
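The integer vs. non-integer ratio distinction above is arithmetic: 500 ms subdivides a 1000 ms beat evenly (ratio 1/2), while 430 ms does not. A toy sketch of the "regularization" idea, snapping an onset to the nearest even subdivision; the candidate set and function name are our assumptions, not the study's analysis:

```python
from fractions import Fraction

def nearest_metrical_position(onset_ms, beat_ms=1000, max_subdivision=4):
    """Snap an onset within one beat to the nearest integer-ratio
    subdivision (halves, thirds, quarters, ...).
    Returns (snapped_ms, ratio)."""
    candidates = []
    for n in range(1, max_subdivision + 1):
        for k in range(n + 1):
            candidates.append((Fraction(k, n), beat_ms * k / n))
    ratio, snapped = min(candidates, key=lambda c: abs(c[1] - onset_ms))
    return snapped, ratio
```

With the abstract's example, an onset at 430 ms in a 1000 ms beat snaps to 500 ms (ratio 1/2), the direction of distortion the OEP data suggest.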

14.
A group of individuals conversing in natural dyads and a group of lecturers were observed for lateral hand movement patterns during speech. Right-handed individuals in both groups displayed a significant right hand bias for gesture movements but no lateral bias for self-touching movements. The study provides external validity for previous laboratory studies of lateralized hand gesture. The results were interpreted as evidence of a central processor for both spoken and gestural communication.

16.
We compared young and older adults’ speech during an error detection task in which some pictures contained visual errors and anomalies while other pictures were error-free. We analyzed three disfluency types: mid-phrase speech fillers (e.g., It’s a little, um, girl), repetitions (e.g., He’s trying to catch the- the birds), and repairs (e.g., She- you can see her legs). Older adults produced more mid-phrase fillers than young adults only when describing pictures containing errors. Such fillers often reflect word retrieval problems and represent clear disruptions to fluency, so this interaction indicates that the need to form and maintain representations of novel information can specifically compromise older adults’ speech fluency. Overall, older adults produced more repetitions and repairs than young adults, regardless of picture type, indicating general age-related increases in these disfluencies. The obtained patterns are discussed in the context of the Transmission Deficit Hypothesis and other approaches to age-related changes in speech fluency.

17.
Gesture-speech synchrony re-stabilizes when hand movement or speech is disrupted by a delayed feedback manipulation, suggesting strong bidirectional coupling between gesture and speech. Yet it has also been argued from case studies in perceptual-motor pathology that hand gestures are a special kind of action that does not require closed-loop re-afferent feedback to maintain synchrony with speech. In the current pre-registered within-subject study, we used motion tracking to conceptually replicate McNeill's (1992) classic study on gesture-speech synchrony under normal and 150 ms delayed auditory feedback of speech conditions (NO DAF vs. DAF). Consistent with, and extending, McNeill's original results, we obtain evidence that (a) gesture-speech synchrony is more stable under DAF versus NO DAF (i.e., increased coupling effect), (b) gesture and speech variably entrain to the external auditory delay, as indicated by a consistent shift in gesture-speech synchrony offsets (i.e., entrainment effect), and (c) the coupling effect and the entrainment effect are co-dependent. We suggest, therefore, that gesture-speech synchrony provides a way for the cognitive system to stabilize rhythmic activity under interfering conditions.

18.
Two models have been suggested to depict the relationship between disorders of limb and orofacial praxis. The first views apraxia as a unitary disorder in which the underlying mechanisms for each type are similar, while the second model suggests that there are two separate praxis systems: one for planning and controlling limb gestures and a second one for planning and controlling orofacial movements. The purpose of this study was to investigate whether a common mechanism may underlie deficits in limb and orofacial praxis in children. This was done by analyzing the types of praxis errors demonstrated by children with developmental motor deficits and normal controls when performing limb and orofacial gestures. Results indicated that there was consistency across modalities (i.e., limb, orofacial) in the types of praxis errors made by children with motor deficits, providing support for the idea that a common mechanism may underlie disruptions to limb and orofacial praxis in children. This study also examined developmental trends in gestural representation and in types of praxis errors. The findings revealed a striking developmental maturation in gestural ability between the ages of 6 and 11 years for all children. However, over this age range, children with developmental motor deficits were impaired relative to normal controls.

19.
The authors hypothesized that the modulation of coordinative stability and accuracy caused by the coalition of egocentric (neuromuscular) and allocentric (directional) constraints varies depending on the plane of motion in which coordination patterns are performed. Participants (N = 7) produced rhythmic bimanual movements of the hands in the sagittal plane (i.e., up-and-down oscillations resulting from flexion-extension of their wrists). The timing of activation of muscle groups, direction of movements, visual feedback, and across-trial movement frequency were manipulated. Results showed that both the egocentric and the allocentric constraints modulated pattern stability and accuracy. However, the allocentric constraint played a dominant role over the egocentric. The removal of vision only slightly destabilized movements, regardless of the effects of directional and (neuro)muscular constraints. The results of the present study hint at considering the plane in which coordination is performed as a mediator of the coalition of egocentric and allocentric constraints that modulates coordinative stability of rhythmic bimanual coordination.

20.
A model of gestural sequencing in speech is proposed that aims to produce biologically plausible, fluent, and efficient movement in generating an utterance. We have previously proposed a modification of the well-known task dynamic implementation of articulatory phonology such that any given articulatory movement can be associated with a quantification of effort (Simko & Cummins, 2010). To this we add a quantitative cost that decreases as speech gestures become more precise, and hence intelligible, and a third cost component that places a premium on the duration of an utterance. Together, these three cost elements allow us to derive algorithmically optimal sequences of gestures and dynamical parameters for generating articulator movement. We show that the optimized movement displays many timing characteristics that are representative of real speech movement, capturing subtle details of relative timing between gestures. Optimal movement sequences also display invariances in timing that suggest syllable-level coordination for CV sequences. We explore the behavior of the model as prosodic context is manipulated in two dimensions: clarity of articulation and speech rate. Smooth, fluid, and efficient movements result.
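A purely illustrative rendering of the three-part cost described above (effort, precision, duration), with made-up functional forms and weights rather than the model's actual parameterization: effort and imprecision fall as a gesture is given more time, the duration penalty grows, so an intermediate duration is optimal.

```python
def total_cost(T, effort_w=1.0, precision_w=1.0, duration_w=2.0):
    """Toy composite cost for a gesture of duration T (s): effort and
    imprecision decrease with T, the duration premium increases with T."""
    effort = effort_w / T ** 2
    imprecision = precision_w / T
    duration = duration_w * T
    return effort + imprecision + duration

def optimal_duration(cost=total_cost):
    """Grid-search the duration that minimizes the composite cost."""
    grid = [i / 1000.0 for i in range(50, 2000)]  # 50 ms to 2 s
    return min(grid, key=cost)
```

The trade-off structure, not the particular forms, is the point: any cost that penalizes both very fast and very slow gestures yields fully specified optimal timing of the kind the abstract describes.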


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号