Similar Documents
20 similar documents retrieved (search time: 7 ms)
1.
2.
Phonological encoding in the silent speech of persons who stutter (total citations: 1; self-citations: 0; citations by others: 1)
The purpose of the present study was to investigate the role of phonological encoding in the silent speech of persons who stutter (PWS) and persons who do not stutter (PNS). Participants were 10 PWS (M=30.4 years, S.D.=7.8), matched in age, gender, and handedness with 11 PNS (M=30.1 years, S.D.=7.8). Each participant performed five tasks: a familiarization task, an overt picture-naming task, a task of self-monitoring target phonemes during concurrent silent picture naming, a task of monitoring target pure tones in aurally presented tonal sequences, and a simple motor task requiring finger button clicks in response to an auditory tone. Results indicated that PWS were significantly slower in phoneme monitoring compared to PNS. No significant between-group differences were present for response speed during the auditory monitoring, picture naming, or simple motor tasks, nor did the two groups differ in percent errors on any of the experimental tasks. The findings were interpreted to suggest a specific deficiency at the level of phonological monitoring in PWS, rather than a general monitoring, reaction time, or auditory monitoring deficit. Educational objectives: As a result of this activity, the participant should be able to: (1) identify and assess the literature on phonological encoding skills in PWS, (2) enumerate and evaluate some major psycholinguistic theories of stuttering, and (3) describe the mechanism by which defective phonological encoding can disrupt fluent speech production.

3.
This study was designed to identify the factors that influence consumer hesitation or delay in online product purchases. The study examined four groups of variables (i.e., consumer characteristics, contextual factors, perceived uncertainty factors, and medium/channel innovation factors) that predict three types of online shopping hesitation (i.e., overall hesitation, shopping cart abandonment, and hesitation at the final payment stage). We found that different sets of delay factors are related to different aspects of online shopping hesitation. The study concludes with suggestions for various delay-reduction devices to help consumers resolve their online decision hesitation.

4.
5.
In this study, we introduce pause detection (PD) as a new tool for studying the on-line integration of lexical and semantic information during speech comprehension. When listeners were asked to detect 200-ms pauses inserted into the last words of spoken sentences, their detection latencies were influenced by the lexical-semantic information provided by the sentences. Listeners took longer to detect a pause when it was inserted within a word that had multiple potential endings, rather than a unique ending, in the context of the sentence. An event-related potential (ERP) variant of the PD procedure revealed brain correlates of pauses as early as 101 to 125 ms following pause onset and patterns of lexical-semantic integration that mirrored those obtained with PD within 160 ms of pause onset. Thus, both the behavioral and the electrophysiological responses to pauses suggest that lexical and semantic processes are highly interactive and that their integration occurs rapidly during speech comprehension.
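The pause-detection manipulation itself amounts to splicing a 200-ms stretch of silence into the waveform of the sentence-final word. The sketch below is only an illustration of that splice, not the authors' stimulus-preparation code; the file names, the soundfile library, and the mid-word insertion point are assumptions.

```python
# Minimal sketch (not the authors' stimulus code): insert a 200-ms silent
# pause into a recording of a sentence-final word.
import numpy as np
import soundfile as sf  # assumed available for WAV input/output

PAUSE_MS = 200                              # pause duration used in the study

word, sr = sf.read("final_word.wav")        # hypothetical mono recording of the last word
insert_at = len(word) // 2                  # illustrative mid-word insertion point
pause = np.zeros(int(sr * PAUSE_MS / 1000), dtype=word.dtype)

with_pause = np.concatenate([word[:insert_at], pause, word[insert_at:]])
sf.write("final_word_with_pause.wav", with_pause, sr)
```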

6.
We examined whether the orientation of the face influences speech perception in face-to-face communication. Participants identified auditory syllables, visible syllables, and bimodal syllables presented in an expanded factorial design. The syllables were /ba/, /va/, /ða/, or /da/. The auditory syllables were taken from natural speech, whereas the visible syllables were produced by computer animation of a realistic talking face. The animated face was presented either in its normal upright orientation or in an inverted orientation (180° frontal rotation). The central intent of the study was to determine whether an inverted view of the face would change the nature of processing bimodal speech or simply influence the information available in visible speech. The results with both the upright and inverted face views were adequately described by the fuzzy logical model of perception (FLMP). The observed differences in the FLMP's parameter values corresponding to the visual information indicate that inverting the view of the face influences the amount of visible information but does not change the nature of the information processing in bimodal speech perception.
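Since the interpretation turns on the FLMP's fit to the upright and inverted conditions, the model's standard integration rule is worth spelling out: each response alternative receives a support value from the auditory source and one from the visual source, the two are multiplied, and the products are normalized across alternatives. The sketch below is a generic statement of that rule, not the authors' fitting code; the support values and the flmp_bimodal helper are illustrative assumptions.

```python
# Minimal sketch of the FLMP integration rule (relative goodness of match),
# not the authors' parameter-estimation code.
def flmp_bimodal(a, v):
    """P(response k | A, V) = a_k * v_k / sum_j (a_j * v_j)."""
    support = {k: a[k] * v[k] for k in a}
    total = sum(support.values())
    return {k: s / total for k, s in support.items()}

# Illustrative support ("fuzzy truth") values for the four alternatives,
# as would be estimated from the unimodal conditions.
auditory = {"ba": 0.7, "va": 0.1, "dha": 0.1, "da": 0.1}
visual   = {"ba": 0.2, "va": 0.6, "dha": 0.1, "da": 0.1}
print(flmp_bimodal(auditory, visual))
```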

7.
8.
Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification.
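The distinction drawn between sensitivity and response bias is the one made in a standard signal-detection analysis, where sensitivity (d') and the decision criterion (c) are computed separately from hit and false-alarm rates. Whether the authors used exactly these formulas is an assumption; the sketch below only illustrates how the two measures are typically separated, with made-up rates.

```python
# Minimal sketch of a standard signal-detection separation of sensitivity (d')
# from response bias (criterion c); the hit/false-alarm rates are illustrative.
from scipy.stats import norm

def dprime_and_bias(hit_rate, false_alarm_rate):
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(false_alarm_rate)
    d_prime = z_hit - z_fa              # sensitivity
    criterion = -0.5 * (z_hit + z_fa)   # response bias
    return d_prime, criterion

# e.g. treating "respond English" as the signal response: hits are English
# trials labeled English, false alarms are Spanish trials labeled English.
print(dprime_and_bias(hit_rate=0.75, false_alarm_rate=0.40))
```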

9.
This is a psycholinguistic study of glossolalia produced by four speakers in an experimental setting. Acoustical patterns (signal waveform, fundamental frequency, and amplitude changes) were compared. The frequency of occurrence of vowels and consonants was computed for the glossolalic samples and compared with General American English. The results showed that three of the four speakers had substantially higher vowel-to-consonant ratios than are found in English speech. The phonology, morphology, and syntax of the four glossolalic productions were analyzed. This revealed two distinct forms of glossolalia. One form, which we called formulaic, tends toward stereotypy and repetitiousness. The second form, which we called innovative, shows more novelty and unpredictability in the chaining of speech-like elements. These contrastive forms of glossolalia may relate to dimensions of linguistic creativity. Precise correlates with personality patterns, educational backgrounds, psychopathology, and other sociolinguistic variables remain to be established.
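The vowel-to-consonant ratio reported above is a simple count-based measure; a minimal sketch of how it could be computed from a romanized transcription follows. The transcription string and the five-vowel inventory are illustrative assumptions, not materials from the study.

```python
# Minimal sketch of a vowel-to-consonant ratio over a romanized transcription.
VOWELS = set("aeiou")  # simplified vowel inventory (assumption)

def vowel_consonant_ratio(transcription: str) -> float:
    segments = [c for c in transcription.lower() if c.isalpha()]
    vowels = sum(1 for c in segments if c in VOWELS)
    consonants = len(segments) - vowels
    return vowels / consonants if consonants else float("inf")

sample = "shanda la koria mosande kia"   # made-up glossolalia-like sample
print(vowel_consonant_ratio(sample))
```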

10.
We tested listeners' ability to identify brief excerpts from popular recordings. Listeners were required to match 200- or 100-msec excerpts with the song titles and artists. Performance was well above chance levels for 200-msec excerpts and poorer but still better than chance for 100-msec excerpts. Performance fell to chance levels when dynamic (time-varying) information was disrupted by playing the 100-msec excerpts backward and when high-frequency information was omitted from the 100-msec excerpts; performance was unaffected by the removal of low-frequency information. In sum, successful identification required the presence of dynamic, high-frequency spectral information.
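The stimulus manipulations described (backward playback and removal of high- or low-frequency information from short excerpts) can be illustrated with a brief signal-processing sketch. This is not the original stimulus-preparation code; the 1 kHz cutoff, the sample rate, and the placeholder signal are assumptions made only for the example.

```python
# Illustrative sketch of the three manipulations: time reversal, low-pass
# filtering (removes high frequencies), and high-pass filtering (removes lows).
import numpy as np
from scipy.signal import butter, sosfiltfilt

SR = 44_100          # assumed sample rate (Hz)
CUTOFF_HZ = 1_000    # assumed boundary between "low" and "high" bands

def take_excerpt(signal, start_s, dur_ms):
    start = int(start_s * SR)
    return signal[start:start + int(SR * dur_ms / 1000)]

def low_pass(excerpt):   # removes high-frequency information
    return sosfiltfilt(butter(4, CUTOFF_HZ, btype="low", fs=SR, output="sos"), excerpt)

def high_pass(excerpt):  # removes low-frequency information
    return sosfiltfilt(butter(4, CUTOFF_HZ, btype="high", fs=SR, output="sos"), excerpt)

song = np.random.randn(SR * 5)              # placeholder for a decoded recording
clip = take_excerpt(song, start_s=2.0, dur_ms=100)
variants = {"backward": clip[::-1], "no_high": low_pass(clip), "no_low": high_pass(clip)}
```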

11.
12.
13.
Although video offers many advantages for recording human eye orientation, it involves such low temporal resolution (60 Hz) that it seems an unpromising method for evaluating the dynamics of rapid (saccadic) eye movements. This study demonstrates, nevertheless, that such measurements can provide surprisingly reliable estimates of the peak velocity of larger saccades. Simulations of 60-Hz sampling of eye position during idealized saccades provided replicated estimates of "apparent peak velocity." The results indicate that when saccadic amplitude is about 10° or larger, estimates of peak velocity would on average be biased downward by less than 10%, with standard deviations due to measurement timing of less than 5%. Experimental data (from recordings of 10° and 20° saccades with customized video) demonstrate that these theoretical sources of uncertainty are considerably smaller than the trial-to-trial variability in performance of real saccades. Reliability of video recording, however, rapidly deteriorates when saccades become smaller than about 10°.
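The simulation logic summarized above (sample an idealized saccade at 60 Hz, treat the largest inter-frame displacement divided by the frame interval as the "apparent peak velocity", and compare it with the true peak) can be sketched briefly. This is not the authors' code; the raised-cosine velocity profile, the 45-ms duration, and the random-phase sampling are assumptions chosen only to illustrate the idea.

```python
# Illustrative sketch (not the authors' simulation): how much does a 60-Hz
# position recording underestimate the true peak velocity of an idealized saccade?
import numpy as np

DT = 1e-4          # fine time step for the "true" trajectory (s)
FRAME_RATE = 60.0  # video frame rate (Hz)

def idealized_saccade(amplitude_deg=10.0, duration_s=0.045, onset_s=0.05, total_s=0.2):
    """Raised-cosine velocity profile (assumed idealization), integrated to position."""
    t = np.arange(0.0, total_s, DT)
    v = np.zeros_like(t)
    during = (t >= onset_s) & (t < onset_s + duration_s)
    v[during] = 1.0 - np.cos(2 * np.pi * (t[during] - onset_s) / duration_s)
    v *= amplitude_deg / (v.sum() * DT)       # scale so the saccade covers the amplitude
    return t, np.cumsum(v) * DT, v            # time, position (deg), velocity (deg/s)

def apparent_peak_velocity(t, position, phase_s):
    """Largest inter-frame displacement, in deg/s, for one sampling phase."""
    frames = np.arange(phase_s, t[-1], 1.0 / FRAME_RATE)
    samples = np.interp(frames, t, position)
    return np.max(np.abs(np.diff(samples))) * FRAME_RATE

t, pos, vel = idealized_saccade()
true_peak = vel.max()
rng = np.random.default_rng(0)
estimates = [apparent_peak_velocity(t, pos, p) for p in rng.uniform(0, 1 / FRAME_RATE, 1000)]
bias_pct = 100 * (np.mean(estimates) - true_peak) / true_peak
print(f"true peak {true_peak:.0f} deg/s, mean bias {bias_pct:+.1f}%, "
      f"SD {100 * np.std(estimates) / true_peak:.1f}% of true peak")
```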

14.
Two eye movement experiments examined whether skilled readers include vowels in the early phonological representations used in word recognition during silent reading. Target words were presented in sentences preceded by parafoveal previews in which the vowel phoneme was concordant or discordant with the vowel phoneme in the target word. In Experiment 1, the orthographic vowel differed from the target in both the concordant and discordant preview conditions. In Experiment 2, the vowel letters in the preview were identical to those in the target word. The phonological vowel was ambiguous, however, and the final consonants of the previews biased the vowel phoneme either toward or away from the target's vowel phoneme. In both experiments, shorter reading times were observed for targets preceded by concordant previews than by discordant previews. Implications for models of word recognition are discussed.

15.
16.
17.
18.
Deviation of real speech from grammatical ideals due to disfluency and other speech errors presents potentially serious problems for the language learner. While infants may initially benefit from attending primarily or solely to infant-directed speech, which contains few grammatical errors, older infants may listen more to adult-directed speech. In a first experiment, post-verbal infants preferred fluent speech to disfluent speech, while pre-verbal infants showed no preference. In a second experiment, post-verbal infants discriminated disfluent and fluent speech even when lexical information was removed, showing that they make use of prosodic properties of the speech stream to detect disfluency. Because disfluencies are highly correlated with grammatical errors, this sensitivity provides infants with a means of filtering ungrammaticality from their input.

19.
20.
