Similar Literature
20 similar documents were retrieved.
1.
2.
Mildner V. Brain and Cognition, 2000, 43(1-3): 345-349.
Interference between manual and verbal performance on two types of concurrent verbal-manual tasks was studied in a sample of 48 female right-handers. The more complex verbal task (storytelling) significantly affected both hands, whereas the less complex (essentially phonemic) task affected only the right hand, with a nonsignificant negative influence on left-hand performance. No significant reciprocal effects of the motor task on verbalization were found.

3.
The present study examined whether infant-directed (ID) speech facilitates intersensory matching of audio–visual fluent speech in 12-month-old infants. German-learning infants’ ability to match auditory and visual German and French fluent speech was assessed using a variant of the intermodal matching procedure, with auditory and visual speech information presented sequentially. In Experiment 1, the sentences were spoken in an adult-directed (AD) manner. Results showed that 12-month-old infants did not match the auditory and visual speech for either the native or the non-native language. However, Experiment 2 revealed that when ID speech stimuli were used, infants did perceive the relation between auditory and visual speech attributes, but only in response to their native language. Thus, the findings suggest that ID speech may influence the intersensory perception of fluent speech and shed further light on multisensory perceptual narrowing.

4.
Forty bilinguals from several language backgrounds were contrasted with a group of English-speaking monolinguals on a verbal-manual interference paradigm. For the monolinguals, concurrent finger-tapping rate during speech output tasks was disrupted only for the right hand, indicating left-hemisphere language dominance. Bilingual laterality patterns were a function of language used: native (L1) versus second acquired (L2), and age of L2 acquisition. Early bilinguals (L1 + L2 acquisition prior to age 6) revealed left-hemisphere dominance for both languages, whereas late bilinguals (L2 acquired beyond age 6) revealed left-hemisphere dominance only for L1 and symmetrical hemispheric involvement for L2.

5.
Following findings that musical rhythmic priming enhances subsequent speech perception, we investigated whether rhythmic priming for spoken sentences can enhance phonological processing – the building blocks of speech – and whether audio–motor training enhances this effect. Participants heard a metrical prime followed by a sentence (with a matching/mismatching prosodic structure), for which they performed a phoneme detection task. Behavioural (RT) data were collected from two groups: one that received audio–motor training and one that did not. We hypothesised that (1) phonological processing would be enhanced in matching conditions, and (2) audio–motor training with the musical rhythms would enhance this effect. Indeed, providing a matching rhythmic prime context resulted in faster phoneme detection, revealing a cross-domain effect of musical rhythm on phonological processing. In addition, our results indicate that rhythmic audio–motor training enhances this priming effect. These results have important implications for rhythm-based speech therapies, and suggest that metrical rhythm in music and speech may rely on shared temporal-processing resources in the brain.

6.
Norris et al. recently reported experimental evidence that listeners learn phoneme categories in response to lexical feedback. To reconcile these findings with their modular account of speech perception, the authors argue that top-down feedback can be used to support phoneme learning, but not to influence on-line phonemic processing. We suggest that these findings have broader implications than the authors assume, and we discuss potential challenges for integrating a modular theory with top-down learning.

7.
Infant-directed speech (IDS) is a speech register characterized by simpler sentences, a slower rate, and more variable prosody. Recent work has implicated it in more subtle aspects of language development. Kuhl et al. (1997) demonstrated that segmental cues for vowels are affected by IDS in a way that may enhance development: the average locations of the extreme “point” vowels (/a/, /i/ and /u/) are further apart in acoustic space. If infants learn speech categories, in part, from the statistical distributions of such cues, these changes may specifically enhance speech category learning. We revisited this by asking (1) whether these findings extend to a new cue (Voice Onset Time, a cue for voicing); (2) whether they extend to the interior vowels, which are much harder to learn and/or discriminate; and (3) whether these changes may be an unintended phonetic consequence of factors like speaking rate or prosodic changes associated with IDS. Eighteen caregivers were recorded reading a picture book including minimal pairs for voicing (e.g., beach/peach) and a variety of vowels to either an adult or their infant. Acoustic measurements suggested that VOT was different in IDS, but not in a way that necessarily supports better development, and that these changes are almost entirely due to the slower rate of speech in IDS. Measurements of the vowels suggested that in addition to changes in the mean, there was also an increase in variance, and statistical modeling suggests that this may counteract the benefit of any expansion of the vowel space. As a whole, this suggests that changes in segmental cues associated with IDS may be an unintended by-product of the slower rate of speech and different prosodic structure, and do not necessarily derive from a motivation to enhance development.
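The trade-off the authors describe, in which a wider spacing of category means can be offset by greater within-category variance, can be illustrated with a simple separability index (distance between category means divided by the pooled within-category standard deviation). The sketch below is purely illustrative: the formant means, standard deviations, and two-vowel setup are hypothetical values invented for this example and are not taken from the study's data or methods.

```python
import numpy as np

def separability(mean_a, mean_b, sd_a, sd_b):
    """d'-style index: distance between category means divided by
    the pooled within-category standard deviation."""
    pooled_sd = np.sqrt((sd_a**2 + sd_b**2) / 2.0)
    return np.linalg.norm(np.asarray(mean_a) - np.asarray(mean_b)) / pooled_sd

# Hypothetical F1/F2 means (Hz) for /i/ and /a/ in adult-directed speech ...
ads_i, ads_a = np.array([300.0, 2300.0]), np.array([750.0, 1200.0])
# ... and in infant-directed speech, with the point vowels pushed further apart.
ids_i, ids_a = np.array([270.0, 2500.0]), np.array([820.0, 1100.0])

# Hypothetical within-category standard deviations: larger in IDS.
sd_ads, sd_ids = 90.0, 140.0

print("ADS separability:", round(separability(ads_i, ads_a, sd_ads, sd_ads), 2))
print("IDS separability:", round(separability(ids_i, ids_a, sd_ids, sd_ids), 2))
# With these invented numbers, the wider IDS spacing does not make up for the
# increased variance, mirroring the trade-off discussed in the abstract.
```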

8.
9.
The processing costs involved in regional accent normalization were evaluated by measuring differences in lexical decision latencies for targets placed at the end of sentences with different French regional accents. Over a series of 6 experiments, the authors examined the time course of comprehension disruption by manipulating the duration and presentation conditions of accented speech. Taken together, the findings of these experiments indicate that regional accent normalization involves a short-term adjustment mechanism that develops as a certain amount of accented signal is available, resulting in a temporary perturbation in speech processing.

10.
Speech alignment, or the tendency of individuals to subtly imitate each other’s speaking styles, is often assessed by comparing a subject’s baseline and shadowed utterances to a model’s utterances, often through perceptual ratings. These types of comparisons provide information about the occurrence of a change in a subject’s speech, but they do not indicate that this change is toward the specific shadowed model. In three experiments, we investigated whether alignment is specific to a shadowed model. Experiment 1 involved the classic baseline-to-shadowed comparison, to confirm that subjects did, in fact, sound more like their model when they shadowed, relative to any preexisting similarities between a subject and a model. Experiment 2 tested whether subjects’ utterances sounded more similar to the model whom they had shadowed or to another, unshadowed model. In Experiment 3, we examined whether subjects’ utterances sounded more similar to the model whom they had shadowed or to another subject who had shadowed a different model. The results of all experiments revealed that subjects sounded more similar to the model whom they had shadowed. This suggests that shadowing-based speech alignment is not just a change, but a change specifically in the direction of the shadowed model.

11.
12.
We present an experiment in which we explored the extent to which visual speech information affects learners’ ability to segment words from a fluent speech stream. Learners were presented with a set of sentences consisting of novel words, in which the only cues to the location of word boundaries were the transitional probabilities between syllables. They were exposed to this language through the auditory modality only, through the visual modality only (where the learners saw the speaker producing the sentences but did not hear anything), or through both the auditory and visual modalities. The learners were successful at segmenting words from the speech stream under all three training conditions. These data suggest that visual speech information has a positive effect on word segmentation performance, at least under some circumstances.
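The boundary cue in this kind of statistical-learning design is the transitional probability between adjacent syllables, TP(x→y) = count(xy) / count(x), which is high within words and low across word boundaries. Below is a minimal, hypothetical sketch of that computation; the syllable stream, the invented "words", and the boundary threshold are illustrative assumptions, not the materials of the experiment.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """TP(x -> y) = count(x, y) / count(x) for adjacent syllable pairs."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    single_counts = Counter(syllables[:-1])
    return {(x, y): c / single_counts[x] for (x, y), c in pair_counts.items()}

def segment(syllables, tps, threshold=0.75):
    """Posit a word boundary wherever the TP between syllables dips below threshold."""
    words, current = [], [syllables[0]]
    for x, y in zip(syllables, syllables[1:]):
        if tps[(x, y)] < threshold:
            words.append("".join(current))
            current = []
        current.append(y)
    words.append("".join(current))
    return words

# Hypothetical stream built from the invented "words" pabiku and tibudo,
# concatenated without pauses, so only within-word TPs are high.
stream = "pa bi ku ti bu do ti bu do pa bi ku pa bi ku ti bu do".split()
tps = transitional_probabilities(stream)
print(segment(stream, tps))
# ['pabiku', 'tibudo', 'tibudo', 'pabiku', 'pabiku', 'tibudo']
```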

13.
14.
Lexical information facilitates speech perception, especially when sounds are ambiguous or degraded. The interactive approach to understanding this effect posits that this facilitation is accomplished through bi-directional flow of information, allowing lexical knowledge to influence pre-lexical processes. Alternative autonomous theories posit feed-forward processing with lexical influence restricted to post-perceptual decision processes. We review evidence supporting the prediction of interactive models that lexical influences can affect pre-lexical mechanisms, triggering compensation, adaptation and retuning of phonological processes generally taken to be pre-lexical. We argue that these and other findings point to interactive processing as a fundamental principle for perception of speech and other modalities.

15.
Abstract

Substantial empirical research has been undertaken on cardiovascular reactivity (CVR); however, interpretation of this research is hampered by a lack of theoretical frameworks. This paper develops a framework initially stimulated by evidence demonstrating that the cardiovascular system increases in activity during communication, and that the extent of this activation depends upon numerous and diverse psychosocial factors. We attempt to account for this phenomenon using post-structuralist ideas concerning the constructive nature of language and its centrality to an individual's sense of self. Our theoretical framework proposes that the CVR exhibited during language use is explicable in terms of self-construction. From this analysis we hypothesised that CVR would differ across conversations about private-self, public-self and non-self topics, and that these differences would depend upon people's speaking histories. We found that the blood pressure of 102 women was most reactive when they talked in a laboratory with a stranger about aspects of their private self, and least reactive during non-self talk, whilst their heart rate was most reactive during talk about their public self. Overall, the results highlight the inextricable link between our inherent socialness and our cardiovascular systems.

SUMMARY

The explanatory scheme outlined here is an attempt to provide a social reconceptualisation of a phenomenon that is typically interpreted in individualistic psychophysiological terms, and it is consistent with the notion that repeated exposure to situations which provoke large haemodynamic changes may lead to the progression of coronary heart disease (CHD). The explanation draws heavily on post-structuralist ideas regarding language, and on the social constructionist notion that engaging in language use is central to constructing and maintaining a sense of self. This sense of self is a central theoretical entity in our everyday lives, produced and maintained in our interactions with others. We argue that it is this centrality of self-construction that helps to explain the extraordinary consistency of elevated CVR in conversation. Further, we have noted the striking parallels between those features of conversations that make the self salient and those that have been associated with elevated CVR. For the scheme to be examined more rigorously, it needs to be tested empirically with new data, using explicitly derived operationalisations and hypotheses.

16.
Reaction times to detect a known or unknown digit in paired or single auditory test stimuli were measured. The results suggest that in classification or matching tasks with stimuli belonging to separate verbal classes, parallel or selective processing may be possible. There was no interaction of type of task (classify vs match) with either dichotic vs mixed monaural presentation, or pairs vs single stimuli, or negative vs positive responses. An attempt was made to suggest the separate processing stages underlying performance in this task.

17.
Does bilingualism hamper lexical access in speech production?
Ivanova I, Costa A. Acta Psychologica, 2008, 127(2): 277-288.
In the present study, we tested the hypothesis that bilingualism may cause a linguistic disadvantage in lexical access even for bilinguals' first and dominant language. To this purpose, we conducted a picture naming experiment comparing the performance of monolinguals and highly-proficient, L1-dominant bilinguals. The results revealed that monolinguals name pictures faster than bilinguals, both when bilinguals perform picture naming in their first and dominant language and when they do so in their weaker second language. This is the first time it has been demonstrated that bilinguals show a naming disadvantage in their L1 in comparison to monolingual speakers.

18.
Most theories of categorization emphasize how continuous perceptual information is mapped to categories. However, equally important are the informational assumptions of a model, the type of information subserving this mapping. This is crucial in speech perception where the signal is variable and context dependent. This study assessed the informational assumptions of several models of speech categorization, in particular, the number of cues that are the basis of categorization and whether these cues represent the input veridically or have undergone compensation. We collected a corpus of 2,880 fricative productions (Jongman, Wayland, & Wong, 2000) spanning many talker and vowel contexts and measured 24 cues for each. A subset was also presented to listeners in an 8AFC phoneme categorization task. We then trained a common classification model based on logistic regression to categorize the fricative from the cue values and manipulated the information in the training set to contrast (a) models based on a small number of invariant cues, (b) models using all cues without compensation, and (c) models in which cues underwent compensation for contextual factors. Compensation was modeled by computing cues relative to expectations (C-CuRE), a new approach to compensation that preserves fine-grained detail in the signal. Only the compensation model achieved a similar accuracy to listeners and showed the same effects of context. Thus, even simple categorization metrics can overcome the variability in speech when sufficient information is available and compensation schemes like C-CuRE are employed.
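As a rough sketch of the modeling contrast described here, one can compare a logistic-regression classifier trained on raw cue values with one trained on compensated cues, where compensation in the spirit of C-CuRE (computing cues relative to expectations) is approximated by regressing each cue on contextual factors such as talker and vowel and keeping only the residuals. The cue names, data layout, and library calls below are illustrative assumptions, not the authors' implementation, and the toy data are random.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression, LogisticRegression

def compensate(df, cue_cols, context_cols):
    """C-CuRE-style compensation (as approximated here): predict each cue from
    the contextual factors and keep only the residual, i.e. the part of the
    cue value not explained by talker/vowel context."""
    context = pd.get_dummies(df[context_cols].astype(str))
    residuals = {}
    for cue in cue_cols:
        expected = LinearRegression().fit(context, df[cue]).predict(context)
        residuals[cue] = df[cue] - expected
    return pd.DataFrame(residuals)

# Hypothetical toy data: two acoustic cues plus context and fricative labels.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "spectral_peak": rng.normal(5000, 800, n),
    "duration_ms": rng.normal(120, 25, n),
    "talker": rng.choice(["t1", "t2", "t3"], n),
    "vowel": rng.choice(["a", "i", "u"], n),
    "fricative": rng.choice(list("szfv"), n),
})

cues = ["spectral_peak", "duration_ms"]
raw_model = LogisticRegression(max_iter=1000).fit(df[cues], df["fricative"])
comp_model = LogisticRegression(max_iter=1000).fit(
    compensate(df, cues, ["talker", "vowel"]), df["fricative"])
# In the study, only the compensated model approached listeners' accuracy;
# with this random toy data both models will of course perform at chance.
```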

19.
A selective adaptation experiment was conducted to determine the ability of various adapting stimuli to alter the perception of a series of 13 synthetic speech syllables. The synthetic test syllables, which varied acoustically in the starting frequency and direction of second- and third-formant transitions, included stop consonant distinctions of place of articulation for the syllable types [bae], [dae], and [gae]. A systematic adaptation effect was produced in the locus of the [bae]-[dae] phonetic boundary for these stimuli after repetitive listening to each of the following adapting syllables: [bae], [phae], [mae], and [vae], indicating that perception of place distinctions among the stop consonants can be altered even by repetitive listening to certain speech sounds not belonging to the stop-consonant class.

20.
