11.
Memory & Cognition - The distinction between categorical and continuous modes of speech perception has played an important role in recent theoretical accounts of the speech perception process....
12.
Recognition memory for consonants and vowels selected from within and between phonetic categories was examined in a delayed comparison discrimination task. Accuracy of discrimination for synthetic vowels selected from both within and between categories was inversely related to the magnitude of the comparison interval. In contrast, discrimination of synthetic stop consonants remained relatively stable both within and between categories. The results indicate that differences in discrimination between consonants and vowels are primarily due to the differential availability of auditory short-term memory for the acoustic cues distinguishing these two classes of speech sounds. The findings provide evidence for distinct auditory and phonetic memory codes in speech perception.
13.
The distinction between auditory and phonetic processes in speech perception was used in the design and analysis of an experiment. Earlier studies had shown that dichotically presented stop consonants are more often identified correctly when they share place of production (e.g., /ba-pa/) or voicing (e.g., /ba-da/) than when neither feature is shared (e.g., /ba-ta/). The present experiment was intended to determine whether the effect has an auditory or a phonetic basis. Increments in performance due to feature-sharing were compared for synthetic stop-vowel syllables in which formant transitions were the sole cues to place of production under two experimental conditions: (1) when the vowel was the same for both syllables in a dichotic pair, as in our earlier studies, and (2) when the vowels differed. Since the increment in performance due to sharing place was not diminished when vowels differed (i.e., when formant transitions did not coincide), it was concluded that the effect has a phonetic rather than an auditory basis. Right ear advantages were also measured and were found to interact with both place of production and vowel conditions. Taken together, the two sets of results suggest that inhibition of the ipsilateral signal in the perception of dichotically presented speech occurs during phonetic analysis.
14.
For many years there has been a consensus that early linguistic experience exerts a profound and often permanent effect on the perceptual abilities underlying the identification and discrimination of stop consonants. It has also been concluded that selective modification of the perception of stop consonants cannot be accomplished easily and quickly in the laboratory with simple discrimination training techniques. In the present article we report the results of three experiments that examined the perception of a three-way voicing contrast by naive monolingual speakers of English. Laboratory training procedures were implemented with a small computer in a real-time environment to examine the perception of voiced, voiceless unaspirated, and voiceless aspirated stops differing in voice onset time. Three perceptual categories were present for most subjects after only a few minutes of exposure to the novel contrast. Subsequent perceptual tests revealed reliable and consistent labeling and categorical-like discrimination functions for all three voicing categories, even though one of the contrasts is not phonologically distinctive in English. The present results demonstrate that the perceptual mechanisms used by adults in categorizing stop consonants can be modified easily with simple laboratory techniques in a short period of time.
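The three-way voicing contrast described above can be sketched as a simple classifier over voice onset time (VOT). The category boundaries below (roughly 0 ms and 30 ms) are illustrative assumptions, not the values used in the study:

```python
def classify_vot(vot_ms):
    """Assign a VOT value (in ms) to one of three illustrative voicing categories.

    The boundaries (~0 ms and ~30 ms) are assumed for illustration only.
    """
    if vot_ms < 0:
        return "voiced"                 # prevoiced: voicing leads the release
    elif vot_ms < 30:
        return "voiceless unaspirated"  # short-lag VOT
    else:
        return "voiceless aspirated"    # long-lag VOT

# A VOT continuum of the kind used in perceptual labeling experiments
continuum = [-60, -20, 10, 25, 40, 80]
labels = [classify_vot(v) for v in continuum]
```

Listeners' labeling functions are then compared against such category assignments to assess how sharply the continuum is partitioned.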
15.
Identification of CV syllables was studied in a backward masking paradigm in order to examine two types of interactions observed between dichotically presented speech sounds: the feature-sharing effect and the lag effect. Pairs of syllables differed in the consonant, the vowel, and their relative times of onset. Interference between the two dichotic inputs was observed primarily for pairs that contrasted on voicing. Performance on pairs that shared voicing remained excellent under all three conditions. The results suggest that the interference underlying the lag effect and the feature-sharing effect for voicing occurs before phonetic analysis, where both auditory inputs interact.
16.
17.
Thirty-seven profoundly deaf children between 8 and 9 years old with cochlear implants and a comparison group of normal-hearing children were studied to measure speaking rates, digit spans, and speech timing during digit span recall. The deaf children displayed longer sentence durations and pauses during recall and shorter digit spans than the normal-hearing children. Articulation rates, measured from sentence durations, were strongly correlated with immediate memory span in both normal-hearing and deaf children, indicating that both slower subvocal rehearsal and slower scanning processes may contribute to the deaf children's shorter digit spans. These findings demonstrate that subvocal verbal rehearsal speed and memory scanning processes are not dependent only on chronological age, as suggested in earlier research. Instead, in this clinical population the absence of early auditory experience and phonological processing activities before implantation appears to produce measurable effects on the working memory processes that rely on verbal rehearsal and serial scanning of phonological information in short-term memory.
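The reported relation between articulation rate and immediate memory span is a Pearson correlation. A minimal sketch follows; the per-child measurements are invented for illustration and are not the study's data:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-child measurements: articulation rate (syllables/sec)
# paired with digit span (items recalled)
rate = [2.1, 2.8, 3.0, 3.5, 4.2]
span = [3, 4, 4, 5, 6]
r = pearson_r(rate, span)  # strongly positive for these toy data
```

A strong positive r is what the abstract describes: children who articulate faster tend to have longer digit spans, consistent with a rehearsal-speed account.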
18.
Our goal in the present study was to examine how observers identify English and Spanish from visual-only displays of speech. First, we replicated the recent findings of Soto-Faraco et al. (2007) with Spanish and English bilingual and monolingual observers using different languages and a different experimental paradigm (identification). We found that prior linguistic experience affected response bias but not sensitivity (Experiment 1). In two additional experiments, we investigated the visual cues that observers use to complete the language-identification task. The results of Experiment 2 indicate that some lexical information is available in the visual signal but that it is limited. Acoustic analyses confirmed that our Spanish and English stimuli differed acoustically with respect to linguistic rhythmic categories. In Experiment 3, we tested whether this rhythmic difference could be used by observers to identify the language when the visual stimuli are temporally reversed, thereby eliminating lexical information but retaining rhythmic differences. The participants performed above chance even in the backward condition, suggesting that the rhythmic differences between the two languages may aid language identification in visual-only speech signals. The results of Experiments 3A and 3B also confirm previous findings that increased stimulus length facilitates language identification. Taken together, the results of these three experiments replicate earlier findings and also show that prior linguistic experience, lexical information, rhythmic structure, and utterance length influence visual-only language identification.
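The logic of the backward condition can be sketched with a toy frame representation (an assumption for illustration): reversing the frame order destroys the lexical sequence while leaving the distribution of segment durations, a crude stand-in for rhythm, unchanged:

```python
def reverse_stimulus(frames):
    """Play a sequence of (label, duration_ms) segments backward in time."""
    return frames[::-1]

def duration_profile(frames):
    """Sorted multiset of segment durations -- a toy proxy for rhythmic structure."""
    return sorted(d for _, d in frames)

# Toy syllable sequence: labels carry 'lexical' order, durations carry rhythm
forward = [("ba", 180), ("na", 140), ("na", 160)]
backward = reverse_stimulus(forward)

# Lexical order is destroyed by reversal...
order_preserved = [s for s, _ in forward] == [s for s, _ in backward]
# ...but the duration distribution survives it
rhythm_preserved = duration_profile(forward) == duration_profile(backward)
```

This is why above-chance identification of reversed stimuli points to rhythmic rather than lexical cues.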
19.
Emotion processing impairments are common in patients undergoing brain surgery for fronto-temporal tumour resection, with potential consequences for social interactions. However, evidence is controversial concerning the side and site of the lesions causing such deficits. This study investigates visual and auditory emotion recognition in brain tumour patients with the aim of clarifying which lesion sites are related to impairments in emotion processing in different modalities. Thirty-four patients were evaluated, before and after surgery, on facial expression and emotional prosody recognition; voxel-based lesion-symptom mapping (VLSM) analyses were performed on the patients' post-surgery MRI images. Results showed that patients' performance decreased after surgery in both visual and auditory modalities but, in general, recovered 3 months after surgery. In facial expression recognition, left brain-damaged patients showed greater post-surgery deterioration than right brain-damaged ones, whose performance specifically decreased for sadness and fear. VLSM analysis revealed two segregated areas in the left hemisphere accounting for post-surgery scores for happy (fronto-temporo-insular region) and surprised (middle frontal gyrus and inferior fronto-occipital fasciculus) facial expressions. Our findings demonstrate that surgical removal of tumours in the fronto-temporal region produces impairment in facial emotion recognition with an overall recovery at 3 months, suggesting a partially different representation of positive and negative emotions in the left and right hemispheres for visually, but not auditorily, presented emotions; moreover, we show that deficits in recognizing specific expressions are associated with discrete lesion locations.
20.
It is well known that the formant transitions of stop consonants in CV and VC syllables are roughly mirror images of each other in time. These formant motions reflect the acoustic correlates of the articulators as they move rapidly into and out of the period of stop closure. Although acoustically different, these formant transitions are correlated perceptually with similar phonetic segments. Earlier research by Klatt and Shattuck (1975) had suggested that mirror-image acoustic patterns resembling formant transitions were not perceived as similar. However, mirror-image patterns could still have some underlying similarity that might facilitate learning, recognition, and the establishment of perceptual constancy of phonetic segments across syllable positions. This paper reports the results of four experiments designed to study the perceptual similarity of mirror-image acoustic patterns resembling the formant transitions and steady-state segments of the CV and VC syllables /ba/, /da/, /ab/, and /ad/. Using a perceptual learning paradigm, we found that subjects could learn to assign mirror-image acoustic patterns to arbitrary response categories more consistently than they could with similar arrangements of the same patterns based on spectrotemporal commonalities. Subjects respond not only to the individual components or dimensions of these acoustic patterns but also process entire patterns and make use of the patterns' internal organization in learning to categorize them consistently according to different classification rules.
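The mirror-image relation between CV and VC transitions can be illustrated with a toy formant contour: reversing a rising /ba/-like transition in time yields the falling /ab/-like counterpart. The frequency values below are invented for illustration:

```python
def time_reverse(contour):
    """Reverse a formant-frequency contour (a list of Hz samples) in time."""
    return contour[::-1]

# Rising F2 transition into the vowel, as in a CV syllable like /ba/
cv_transition = [1100, 1300, 1500, 1700]
# The VC counterpart (/ab/) is roughly the mirror image in time
vc_transition = [1700, 1500, 1300, 1100]

is_mirror = time_reverse(cv_transition) == vc_transition
```

The experiments above ask whether listeners treat such time-reversed pairs as belonging together more readily than pairs grouped by shared spectrotemporal features.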