Similar Documents
20 similar documents found (search time: 31 ms)
1.
Speech imagery plays an important role in the brain's preprocessing mechanisms and is also a current research focus in the field of brain-computer interfaces. Compared with normal speech production, speech imagery shares many similarities in its theoretical models, activated brain regions, and neural conduction pathways. However, the neural mechanisms of speech imagery in people with speech disorders, and of imagining meaningful words and sentences, differ from those of normal speech production. Given the complexity of the human speech system, research on the neural mechanisms of speech imagery still faces a series of challenges. Future work could further explore quality-assessment tools and neural decoding paradigms for speech imagery, brain control circuits, activation pathways, speech-imagery mechanisms in people with speech disorders, and the neural signals underlying word and sentence imagery, providing a basis for improving the recognition rate of brain-computer interfaces and facilitating communication for people with speech disorders.

2.
Women are more intolerant of hate speech than men. This study examined relationality measures as mediators of gender differences in the perception of the harm of hate speech and the importance of freedom of speech. Participants were 107 male and 123 female college students. Questionnaires assessed the perceived harm of hate speech, the importance of freedom of speech, empathy, relational and collective interdependence, and connected and separate ways of knowing. Gender differences were found for the harm of hate speech, freedom of speech, empathy, and separate learning as a way of knowing. Women were more negative regarding the harm of hate speech and regarded freedom of speech as less important than men. Additionally, the perceived harm of hate speech was positively associated with empathy, connected knowing, and interdependence, and freedom of speech was positively associated with separate learning and negatively with empathy. Empathy mediated gender differences in the perceived harm of hate speech, and separate learning mediated gender differences in the importance of freedom of speech.

3.
Four experiments are reported investigating previous findings that speech perception interferes with concurrent verbal memory but difficult nonverbal perceptual tasks do not, to any great degree. The forgetting produced by processing noisy speech could not be attributed to task difficulty, since equally difficult nonspeech tasks did not produce forgetting, and the extent of forgetting produced by speech could be manipulated independently of task difficulty. The forgetting could not be attributed to similarity between memory material and speech stimuli, since clear speech, analyzed in a simple and probably acoustically mediated discrimination task, produced little forgetting. The forgetting could not be attributed to a combination of similarity and difficulty, since a very easy speech task involving clear speech produced as much forgetting as noisy speech tasks, as long as overt reproduction of the stimuli was required. By assuming that noisy speech and overtly reproduced speech are processed at a phonetic level but that clear, repetitive speech can be processed at a purely acoustic level, the forgetting produced by speech perception could be entirely attributed to the level at which the speech was processed. In a final experiment, results were obtained which suggest that if prior set induces processing of noisy and clear speech at comparable levels, the difference between the effects of noisy speech processing and clear speech processing on concurrent memory is completely eliminated.

4.
The relationship between subjective estimates of the comprehensibility of connected, free-running speech and rate of speech was investigated for each of two types of time-compressed speech: pitch-varying speeded speech and pitch-normalized compressed speech. The midpoints of the resulting functions approximated the values obtained by a previously described speech-rate tracking method. For equivalent degrees of comprehensibility, rates were higher for compressed speech than for speeded speech, indicating that estimates are sensitive to the intelligibility of speech. Subjective estimates of the comprehensibility of time-compressed speech thus provide a means of assessing the intelligibility of connected speech.

5.
Do young infants treat speech as a special signal, compared with structurally similar non‐speech sounds? We presented 2‐ to 7‐month‐old infants with nonsense speech sounds and complex non‐speech analogues. The non‐speech analogues retain many of the spectral and temporal properties of the speech signal, including the pitch contour information which is known to be salient to young listeners, and thus provide a stringent test for a potential listening bias for speech. Our results show that infants as young as 2 months of age listened longer to speech sounds. This listening selectivity indicates that early‐functioning biases direct infants’ attention to speech, granting speech a special status in relation to other sounds.

6.
The authors investigated the effects of changes in horizontal viewing angle on visual and audiovisual speech recognition in 4 experiments, using a talker's face viewed full face, three quarters, and in profile. When only experimental items were shown (Experiments 1 and 2), identification of unimodal visual speech and visual speech influences on congruent and incongruent auditory speech were unaffected by viewing angle changes. However, when experimental items were intermingled with distractor items (Experiments 3 and 4), identification of unimodal visual speech decreased with profile views, whereas visual speech influences on congruent and incongruent auditory speech remained unaffected by viewing angle changes. These findings indicate that audiovisual speech recognition withstands substantial changes in horizontal viewing angle, but explicit identification of visual speech is less robust. Implications of this distinction for understanding the processes underlying visual and audiovisual speech recognition are discussed.

7.
Judgments of offensiveness and accountability of hate speech as a function of contextual factors of the speech and characteristics of the observers were examined. A sample of 212 college students and 53 community participants responded to 12 scenarios describing incidents of hate speech. The within-subject variables manipulated in the scenarios were the target of the speech (ethnic groups, women, and gays), the publicness of the speech, and the behavioral response of the target. Ethnic speech was rated more offensive than gender- or gay-targeted speech; public speech was rated more offensive and more accountable than private speech; public speech was rated more offensive and accountable when the target responded, whereas private speech was rated more offensive when the target did not respond. The gender and ethnicity of the raters moderated the effects of the experimental variables, as well as showing main effects. The findings of this study suggest that responses to hate speech are complex and contextual.

8.
Upon hearing an ambiguous speech sound dubbed onto lipread speech, listeners adjust their phonetic categories in accordance with the lipread information (recalibration) that tells what the phoneme should be. Here we used sine wave speech (SWS) to show that this tuning effect occurs if the SWS sounds are perceived as speech, but not if the sounds are perceived as non-speech. In contrast, selective speech adaptation occurred irrespective of whether listeners were in speech or non-speech mode. These results provide new evidence for the distinction between a speech and non-speech processing mode, and they demonstrate that different mechanisms underlie recalibration and selective speech adaptation.

9.
Private speech has been widely studied in children, but it is clear that adults use private speech as well. In this study, illiterate adults’ private speech during a “school-like” task was explored as a function of literacy level and task difficulty in a sample of 126 adults enrolled in a public literacy program. A main effect for literacy level was found: private speech was more internalized and less externalized among adults with higher literacy levels. Externalized private speech was more frequently observed among illiterate adults engaged in the most difficult task. Private speech served cognitive functions, as indicated by the proportion of self-regulatory private speech and the proportion of private speech preceding actions being higher in the advanced literacy group and among illiterate adults doing the easier task. Internalized private speech, self-regulatory private speech, and private speech preceding action were each positively correlated with performance and negatively correlated with time to complete the task. The use of private speech in illiterate adults appears to be linked to the mastery of cultural experiences, such as literacy, similar to the self-talk of children.

10.
We present the results of studies designed to measure the segmental intelligibility of eight text-to-speech systems and a natural speech control, using the Modified Rhyme Test (MRT). Results indicated that the voices tested could be grouped into four categories: natural speech, high-quality synthetic speech, moderate-quality synthetic speech, and low-quality synthetic speech. The overall performance of the best synthesis system, DECtalk-Paul, was equivalent to natural speech only in terms of performance on initial consonants. The findings are discussed in terms of recent work investigating the perception of synthetic speech under more severe conditions. Suggestions for future research on improving the quality of synthetic speech are also considered.

11.
Inner speech is typically characterized as either the activation of abstract linguistic representations or a detailed articulatory simulation that lacks only the production of sound. We present a study of the speech errors that occur during the inner recitation of tongue-twister-like phrases. Two forms of inner speech were tested: inner speech without articulatory movements and articulated (mouthed) inner speech. Although mouthing one’s inner speech could reasonably be assumed to require more articulatory planning, prominent theories assume that such planning should not affect the experience of inner speech and, consequently, the errors that are “heard” during its production. The errors occurring in articulated inner speech exhibited the phonemic similarity effect and the lexical bias effect, two speech-error phenomena that, in overt speech, have been localized to an articulatory-feature-processing level and a lexical-phonological level, respectively. In contrast, errors in unarticulated inner speech exhibited only the lexical bias effect, not the phonemic similarity effect. The results are interpreted as support for a flexible abstraction account of inner speech. This conclusion has ramifications for the embodiment of language and speech and for theories of speech production.

12.
Few studies have examined connected speech in demented and non-demented patients with Parkinson’s disease (PD). We assessed the speech production of 35 patients with Lewy body spectrum disorder (LBSD), including non-demented PD patients, patients with PD dementia (PDD), and patients with dementia with Lewy bodies (DLB), in a semi-structured narrative speech sample in order to characterize impairments of speech fluency and to determine the factors contributing to reduced speech fluency in these patients. Both demented and non-demented PD patients exhibited reduced speech fluency, characterized by reduced overall speech rate and long pauses between sentences. Reduced speech rate in LBSD correlated with measures of between-utterance pauses, executive functioning, and grammatical comprehension. Regression analyses related non-fluent speech, grammatical difficulty, and executive difficulty to atrophy in frontal brain regions. These findings indicate that multiple factors contribute to slowed speech in LBSD, and this is mediated in part by disease in frontal brain regions.

13.
Traditionally, models of speech comprehension and production do not depend on concepts and processes from the phonological short-term memory (pSTM) literature. Likewise, in working memory research, pSTM is considered to be a language-independent system that facilitates language acquisition rather than speech processing per se. We discuss couplings between pSTM, speech perception and speech production, and we propose that pSTM arises from the cycling of information between two phonological buffers, one involved in speech perception and one in speech production. We discuss the specific role of these processes in speech processing, and argue that models of speech perception and production, and our understanding of their neural bases, will benefit from incorporating them.

14.
Kim J, Sironic A, Davis C. Perception, 2011, 40(7): 853-862.
Seeing the talker improves the intelligibility of speech degraded by noise (a visual speech benefit). Given that talkers exaggerate spoken articulation in noise, this set of two experiments examined whether the visual speech benefit was greater for speech produced in noise than in quiet. We first examined the extent to which spoken articulation was exaggerated in noise by measuring the motion of face markers as four people uttered 10 sentences either in quiet or in babble-speech noise (these renditions were also filmed). The tracking results showed that articulated motion in speech produced in noise was greater than that produced in quiet and was more highly correlated with speech acoustics. Speech intelligibility was tested in a second experiment using a speech-perception-in-noise task under auditory-visual and auditory-only conditions. The results showed that the visual speech benefit was greater for speech recorded in noise than for speech recorded in quiet. Furthermore, the amount of articulatory movement was related to performance on the perception task, indicating that the enhanced gestures made when speaking in noise function to make speech more intelligible.

15.
Sources of variability in children’s language growth
The present longitudinal study examines the role of caregiver speech in language development, especially syntactic development, using 47 parent–child pairs of diverse SES background from 14 to 46 months. We assess the diversity (variety) of words and syntactic structures produced by caregivers and children. We use lagged correlations to examine language growth and its relation to caregiver speech. Results show substantial individual differences among children, and indicate that diversity of earlier caregiver speech significantly predicts corresponding diversity in later child speech. For vocabulary, earlier child speech also predicts later caregiver speech, suggesting mutual influence. However, for syntax, earlier child speech does not significantly predict later caregiver speech, suggesting a causal flow from caregiver to child. Finally, demographic factors, notably SES, are related to language growth, and are, at least partially, mediated by differences in caregiver speech, showing the pervasive influence of caregiver speech on language growth.
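The lagged-correlation approach named in this abstract amounts to correlating a caregiver measure at one session with the child measure at a later session. A minimal sketch of that computation is shown below; the function name and the example series are illustrative, not taken from the study:

```python
import statistics


def lagged_correlation(x, y, lag=1):
    """Pearson r between x[t] and y[t + lag], e.g. caregiver lexical
    diversity at session t vs. child lexical diversity `lag` sessions later."""
    xs, ys = x[:len(x) - lag], y[lag:]          # align x[t] with y[t + lag]
    mx, my = statistics.mean(xs), statistics.mean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) *
           sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den
```

Comparing the lagged correlation in both directions (caregiver leading child vs. child leading caregiver), as the study does, is a simple way to probe the direction of influence.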

16.
Perception of visual speech and the influence of visual speech on auditory speech perception is affected by the orientation of a talker's face, but the nature of the visual information underlying this effect has yet to be established. Here, we examine the contributions of visually coarse (configural) and fine (featural) facial movement information to inversion effects in the perception of visual and audiovisual speech. We describe two experiments in which we disrupted perception of fine facial detail by decreasing spatial frequency (blurring) and disrupted perception of coarse configural information by facial inversion. For normal, unblurred talking faces, facial inversion had no influence on visual speech identification or on the effects of congruent or incongruent visual speech movements on perception of auditory speech. However, for blurred faces, facial inversion reduced identification of unimodal visual speech and effects of visual speech on perception of congruent and incongruent auditory speech. These effects were more pronounced for words whose appearance may be defined by fine featural detail. Implications for the nature of inversion effects in visual and audiovisual speech are discussed.

17.
An automated threshold method has been developed for determining the maximum rate of speech understood by individual listeners. Two experiments were undertaken to determine whether the threshold was related to the comprehension of speech or to speech intelligibility. The first experiment compared thresholds of two types of rapid speech reportedly different in intelligibility: simple speeded speech and speech compressed by the sampling method. The second experiment sought to determine the relationship of the threshold to traditional comprehension measures. The results are discussed in terms of the intelligibility and comprehensibility of speech.

18.
This study explored the effect of reading with reversed speech on the frequency of stuttering. Eight adults who stutter served as participants and read four 300-syllable passages while listening to three types of speech stimuli (normal speech in choral reading, reversed speech at normal speed, and reversed speech at half speed) plus a control condition of no auditory feedback. A repeated-measures analysis of variance showed a significant decrease in stuttering frequency in the choral reading condition but not in reversed speech at normal and half speed. However, the reversed speech at half-speed condition showed a large effect size (ω² = 0.32). The data suggest that a forward-moving speech feedback signal is not essential to decrease the frequency of stuttering in adults who stutter.
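The ω² effect size reported in this abstract is a less biased alternative to η² for ANOVA designs. For a one-way design it can be computed from the sums of squares as sketched below; the numbers in the test are hypothetical and chosen only to illustrate a value near 0.32, not reconstructed from the study:

```python
def omega_squared(ss_between, ss_within, df_between, df_within):
    """Omega-squared for a one-way ANOVA:
    (SS_between - df_between * MS_within) / (SS_total + MS_within)."""
    ms_within = ss_within / df_within
    ss_total = ss_between + ss_within
    return (ss_between - df_between * ms_within) / (ss_total + ms_within)
```

Unlike η², ω² corrects for the expected within-group variance and can go slightly negative when the between-group effect is near zero.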

19.
Neural correlates of bimodal speech and gesture comprehension
The present study examined the neural correlates of speech and hand gesture comprehension in a naturalistic context. Fifteen participants watched audiovisual segments of speech and gesture while event-related potentials (ERPs) were recorded to the speech. Gesture influenced the ERPs to the speech. Specifically, there was a right-lateralized N400 effect, reflecting semantic integration, when gestures mismatched versus matched the speech. In addition, early sensory components in bilateral occipital and frontal sites differentiated speech accompanied by matching versus non-matching gestures. These results suggest that hand gestures may be integrated with speech at early and late stages of language processing.

20.
Perception of intersensory temporal order is particularly difficult for (continuous) audiovisual speech, as perceivers may find it difficult to notice substantial timing differences between speech sounds and lip movements. Here we tested whether this occurs because audiovisual speech is strongly paired (“unity assumption”). Participants made temporal order judgments (TOJ) and simultaneity judgments (SJ) about sine-wave speech (SWS) replicas of pseudowords and the corresponding video of the face. Listeners in speech and non-speech mode were equally sensitive judging audiovisual temporal order. Yet, using the McGurk effect, we could demonstrate that the sound was more likely integrated with lipread speech if heard as speech than non-speech. Judging temporal order in audiovisual speech is thus unaffected by whether the auditory and visual streams are paired. Conceivably, previously found differences between speech and non-speech stimuli are not due to the putative “special” nature of speech, but rather reflect low-level stimulus differences.


Copyright © Beijing Qinyun Technology Development Co., Ltd. (北京勤云科技发展有限公司)  京ICP备09084417号