Similar Literature
20 similar documents retrieved (search time: 15 ms)
1.
This study explores the use of two types of facial expressions, linguistic and affective, in a lateralized recognition accuracy test with hearing and deaf subjects. The linguistic expressions represent unfamiliar facial expressions for the hearing subjects, whereas they serve as meaningful linguistic emblems for deaf signers. Hearing subjects showed left visual field advantages for both types of signals, while deaf subjects' visual field asymmetries were greatly influenced by the order of presentation. The results suggest that for hearing persons, the right hemisphere may predominate in the recognition of all forms of facial expression. For deaf signers, hemispheric specialization for the processing of facial signals may be influenced by the different functions these signals serve in this population. The use of noncanonical facial signals in laterality paradigms is encouraged, as it provides an additional avenue of exploration into the underlying determinants of hemispheric specialization for recognition of facial expression.

2.
To assess the domain specificity of experience-dependent pitch representation, we evaluated the mismatch negativity (MMN) and discrimination judgments of English musicians, English nonmusicians, and native Chinese for pitch contours presented in a nonspeech context using a passive oddball paradigm. Stimuli consisted of homologues of Mandarin high rising (T2) and high level (T1) tones, and a linear rising ramp (T2L). One condition involved a between-category contrast (T1/T2), the other a within-category contrast (T2L/T2). Irrespective of condition, musicians and Chinese showed larger MMN responses than nonmusicians, and the Chinese showed larger responses than the musicians. The Chinese, however, were less accurate than nonnatives in overt discrimination of T2L and T2. Taken together, these findings suggest that experience-dependent effects on pitch contours are domain-general and not driven by linguistic categories. Yet specific differences in long-term experience in pitch processing between domains (music vs. language) may lead to gradations in cortical plasticity to pitch contours.
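As an illustration of the stimulus delivery described above, the Python sketch below builds one passive-oddball trial sequence. It is illustrative only: the 85/15 standard-to-deviant ratio, the trial count, and the no-adjacent-deviants constraint are common conventions assumed here, not figures reported in the abstract.

```python
import random

def oddball_sequence(standard, deviant, n_trials=400, p_deviant=0.15, seed=1):
    """Build a passive-oddball trial list: frequent standards, rare deviants,
    with no two deviants presented back to back (a common constraint)."""
    rng = random.Random(seed)
    n_dev = round(n_trials * p_deviant)
    n_std = n_trials - n_dev
    # Drop each deviant into a distinct "gap" between standards so that
    # deviants can never be adjacent.
    gaps = set(rng.sample(range(n_std + 1), n_dev))
    seq = []
    for i in range(n_std + 1):
        if i in gaps:
            seq.append(deviant)
        if i < n_std:
            seq.append(standard)
    return seq

# Between-category condition: Mandarin high level (T1) as standard,
# high rising (T2) as deviant; the within-category condition would
# swap in T2L and T2 in the same way.
trials = oddball_sequence(standard="T1", deviant="T2")
print(len(trials), "trials,", trials.count("T2"), "deviants")
```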

3.
The nature of hemispheric processing in the prelingually deaf was examined in a picture-letter matching task. It was hypothesized that linguistic competence in the deaf would be associated with normal or near-normal laterality (i.e., a left hemisphere advantage for analytic linguistic tasks). Subjects were shown a simple picture of a common object (e.g., lamp), followed by brief unilateral presentation of a manually signed or orthographic letter, and they had to indicate as quickly as possible whether the letter was present in the spelling of the object's label. While hearing subjects showed a marked left hemisphere advantage, no such superiority was found for either a linguistically skilled or unskilled group of deaf students. In the skilled group, however, there was a suggestion of a right hemisphere advantage for manually signed letters. It was concluded that while hemispheric asymmetry of function does not develop normally in the deaf, the absence of this normal pattern does not preclude the development of the analytic skills needed to deal with the structure of language.

4.
Binocular mixtures of equiluminous components of different wavelengths were matched with additive monoptic mixtures of the same components. After a satisfactory match had been achieved, the luminance of each colour in the monoptic mixture was measured photometrically. After presentation of an orthogonal grating superimposed on the colour shown to one eye, the colour matching was repeated. The grating induced a strong dominance of the colour with which it was combined. Yet the uncontoured colour was not entirely suppressed, but contributed to the binocular colour to various degrees. In three subjects with anisometropic amblyopia in one eye the colour presented to the amblyopic eye contributed little or nothing to the haploscopic colour mixture, depending on the degree of amblyopia. This diminished contribution could not be enhanced by a grid. In three cases with strabismic amblyopia and in one case with strabismus alternans no haploscopic colour mixture effects could be demonstrated. The observations are discussed in the context of neurophysiological findings in the visual system of primates, and it is suggested that colour and contour are not transmitted through independent channels from the retina to the cortex.

5.
Morphosyntactic capacities of normal brain hemispheres were compared in lexical decision studies involving centrally and laterally presented Serbo-Croatian nouns in different cases. Cases are distinguished by different suffixes and syntactic roles. Experiment 1 confirmed and extended previous findings of the nominative superiority effect: words in the nominative case were processed faster and more accurately than words in the other three cases, and nonwords in the nominative case led to more false positive reactions than nonwords in other cases. In Experiment 2 this effect was replicated for right visual field stimuli: nominatives had faster reaction times and smaller error rates than accusatives, and the reversed pattern was found for nonwords. For left visual field stimuli, only the word error analysis found the nominative superior, while the other three analyses (word reaction times, nonword reaction times, and nonword error rates) showed no significant case effect. Word familiarity had an equally strong effect in both hemispheres. The results suggest that centrally presented stimuli are processed by the left hemisphere, that laterally presented stimuli are processed by the hemisphere that initially receives them, and that the right hemisphere has a frequency-sensitive lexicon. Reduced right-hemisphere sensitivity for case differences may be due to a different lexicon structure or to the absence of appropriate morphological or syntactic mechanisms.

6.
The “McGurk effect” demonstrates that visual (lip-read) information is used during speech perception even when it is discrepant with auditory information. While this has been established as a robust effect in subjects from Western cultures, our own earlier results had suggested that Japanese subjects use visual information much less than American subjects do (Sekiyama & Tohkura, 1993). The present study examined whether Chinese subjects would also show a reduced McGurk effect due to their cultural similarities with the Japanese. The subjects were 14 native speakers of Chinese living in Japan. Stimuli consisted of 10 syllables (/ba/, /pa/, /ma/, /wa/, /da/, /ta/, /na/, /ga/, /ka/, /ra/) pronounced by two speakers, one Japanese and one American. Each auditory syllable was dubbed onto every visual syllable within one speaker, resulting in 100 audiovisual stimuli in each language. The subjects’ main task was to report what they thought they had heard while watching and listening to the speaker as each stimulus was uttered. Compared with previous results obtained with American subjects, the Chinese subjects showed a weaker McGurk effect. The results also showed that the magnitude of the McGurk effect depends on the length of time the Chinese subjects had lived in Japan. Factors that foster and alter the Chinese subjects’ reliance on auditory information are discussed.
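The 10 × 10 dubbing design described above is easy to reproduce in code. The Python sketch below enumerates the full audiovisual stimulus set for one speaker and adds a toy scoring function; the response format and the "visual influence" index are assumptions for illustration, not the scoring reported in the study.

```python
from itertools import product

SYLLABLES = ["ba", "pa", "ma", "wa", "da", "ta", "na", "ga", "ka", "ra"]

def dubbing_design(syllables=SYLLABLES):
    """Cross every auditory syllable with every visual (lip-read) syllable
    for one speaker: 10 x 10 = 100 audiovisual stimuli per language."""
    return [{"auditory": a, "visual": v, "congruent": a == v}
            for a, v in product(syllables, syllables)]

def visual_influence_rate(responses):
    """Proportion of incongruent trials on which the reported syllable differs
    from the auditory one -- a crude index of reliance on visual information."""
    incongruent = [r for r in responses if r["auditory"] != r["visual"]]
    influenced = [r for r in incongruent if r["reported"] != r["auditory"]]
    return len(influenced) / len(incongruent) if incongruent else 0.0

stimuli = dubbing_design()
print(len(stimuli), "stimuli:",
      sum(s["congruent"] for s in stimuli), "congruent,",
      sum(not s["congruent"] for s in stimuli), "incongruent")
```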

7.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
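A minimal sketch of the equal-step morphing and of within- versus between-category labelling for continuum pairs is given below in Python. It is purely illustrative: real facial morphs warp landmark geometry rather than blending pixels, and the category boundary index is a hypothetical value, since the abstract does not state where the perceived boundary fell.

```python
import numpy as np

def morph_continuum(img_a, img_b, n_steps=11):
    """Blend two end-point images (float arrays) in equal linear steps:
    step 0 is 100% expression A, step 10 is 100% expression B."""
    weights = np.linspace(0.0, 1.0, n_steps)
    return [(1 - w) * img_a + w * img_b for w in weights]

def pair_type(i, j, boundary=5):
    """Label a continuum pair as crossing the (assumed) category boundary
    or falling within a single category."""
    return "between" if (i < boundary) != (j < boundary) else "within"

# Placeholder end-point images standing in for the photo exemplars.
a, b = np.zeros((64, 64)), np.ones((64, 64))
continuum = morph_continuum(a, b)
print(len(continuum), "images;", pair_type(3, 7), pair_type(1, 3))
```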

8.
J. Polich, Acta Psychologica, 1986, 61(2): 137-151
A visual detection paradigm was employed in two experiments to explore the relationship between stimulus configuration and hemispheric processing. Stimulus arrays composed of small vertical lines were presented tachistoscopically to the left and right hemispheres. Subjects judged whether all the lines within an array were vertically oriented (same) or whether a horizontal line was present (different). In general, right hemisphere presentations demonstrated faster detection than the left for arrays consisting of all identical elements, whereas left hemisphere presentations demonstrated faster detection than the right for arrays containing a different element. Similar results were obtained for both experiments, although the response patterns and error rates varied somewhat with the inter-item spacing and organization of stimulus elements. These effects suggest that differential hemispheric processing in target detection tasks is determined by the nature of the stimulus materials, which governs the relative efficacy of hemispheric processing and decision outcomes.
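As a toy illustration of this kind of stimulus, the Python sketch below builds a grid of vertical lines with an optional single horizontal target. The grid size and the characters used to stand in for line segments are assumptions, not the dimensions used in the experiments.

```python
import random

def line_array(n_rows=4, n_cols=4, different=False, seed=None):
    """Return a grid of line orientations for a same/different detection task:
    all vertical ('|') on 'same' trials, one horizontal ('-') target inserted
    at a random position on 'different' trials."""
    rng = random.Random(seed)
    grid = [["|"] * n_cols for _ in range(n_rows)]
    if different:
        r, c = rng.randrange(n_rows), rng.randrange(n_cols)
        grid[r][c] = "-"
    return grid

for row in line_array(different=True, seed=3):
    print(" ".join(row))
```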

9.
The role of the left cerebral hemisphere in the discrimination of duration was examined in a group of normal subjects. Two tasks were presented: the first required a reaction-time response to the offset of monaural pulse sequences varying in interpulse duration, and the second required the discrimination of small differences in durations within a delayed-comparison paradigm. In each task a right-ear advantage was obtained when the durations were 50 msec or less. No ear advantage was obtained for the longer durations of 67 to 120 msec. Since the perceptual distinctiveness of phonemes may be provided by durations approximating 50 msec, the nature of the relationship between the left hemisphere's role in temporal processing and its role in speech processing may be elaborated.

10.
11.
Two experiments tested how facial details are used in recognizing face drawings presented to either the left or right visual field (VF). Subjects used inner and outer features about equally in both the left and right VFs. The major finding was a very strong tendency to recognize the upper facial features more accurately than the lower facial features. The top-to-bottom recognition difference occurred in both VFs, in contrast to an earlier study by J. Sergent (1982, Journal of Experimental Psychology: Human Perception and Performance, 8, 1-14). Methodological differences between the present experiments and Sergent's studies were discussed. It was concluded that both the left and right hemispheres recognize novel faces using top-to-bottom serial processing.

12.
In the present work, we developed a database of nonlinguistic sounds that mirror prosodic characteristics typical of language and thus carry affective information, but do not convey linguistic information. In a dichotic-listening task, we used these novel stimuli as a means of disambiguating the relative contributions of linguistic and affective processing across the hemispheres. This method was applied to both children and adults with the goal of investigating the role of developing cognitive resource capacity on affective processing. Results suggest that children's limited computational resources influence how they process affective information and rule out attentional biases as a factor in children's perceptual asymmetries for nonlinguistic affective sounds. These data further suggest that investigation of perception of nonlinguistic affective sounds is a valuable tool in assessing interhemispheric asymmetries in affective processing, especially in parceling out linguistic contributions to hemispheric differences.

13.
Hemispheric asymmetry was examined for native English speakers identifying consonant-vowel-consonant (CVC) non-words presented in standard printed form, in standard handwritten cursive form, or in handwritten cursive with the letters separated by small gaps. For all three conditions, fewer errors occurred when stimuli were presented to the right visual field/left hemisphere (RVF/LH) than to the left visual field/right hemisphere (LVF/RH), and qualitative error patterns indicated that the last letter was missed more often than the first letter on LVF/RH trials but not on RVF/LH trials. Despite this overall similarity, the RVF/LH advantage was smaller for both types of cursive stimuli than for printed stimuli. In addition, the difference between first-letter and last-letter errors was smaller for handwritten cursive than for printed text, especially on LVF/RH trials. These results suggest a greater contribution of the right hemisphere to the identification of handwritten cursive, which is likely related to visual complexity and to qualitative differences in the processing of cursive versus print.

14.
The dichotomies verbal/visuospatial, serial/parallel and analytic/holistic are reviewed with respect to differences in hemispheric processing. A number of experimental parameters may be varied in such tasks, and together with certain frequently occurring weaknesses of experimental design may account for the often discrepant results hitherto reported. The above factors are systematically reviewed, and three further experiments are reported which attempt to fill in the missing designs. Further evidence is given in support of the hypothesis that right-hemisphere superiority is most apparent in processes leading to identity matching. It is quantitative rather than qualitative, and may depend upon operations on the entire gestalt, such as holistic matching, mental rotation, reflection, distortion, etc., rather than, e.g., simultaneous (parallel) processing of discretely analysed or isolated features or elements. On the other hand, left-hemisphere involvement in visuospatial processing is thought to reflect analysis of the configuration into its separable components; such processing may be either serial or parallel, and may frequently lead to a judgement of “different”.

15.
The present report investigates the effect of various cues to phrase structure upon the hemispherically lateralized processing of phonetic structure. Meaningless sequences were paired for dichotic presentation and were delivered under two different conditions, termed structured and semistructured. The dichotic sequences in the two conditions contained the same nonsense syllable stems, English bound morphemes, and English function words. Also, each of the sequences in both conditions was grammatically ordered in the sense that if the nonsense stems were replaced by English stems, a grammatical sentence would result. The conditions differed with respect to prosody, however: the structured sequences were characterized by the acoustic correlates of constituent structure; the semistructured sequences were delivered in a monotone. A significant right-ear superiority was observed in the structured condition, but not in the semistructured condition. These perceptual laterality differences are discussed in relation to cerebral dominance for language and in relation to speech processing generally.

16.
17.
Visual field differences for the recognition of emotional expression were investigated using a tachistoscopic procedure. Cartoon line drawings of five adult male characters, each with five emotional expressions ranging from extremely positive to extremely negative, were used as stimuli. Single stimuli were presented unilaterally for 85 msec. Subjects (N = 20) were asked to compare this target face to a subsequent centrally presented face and to decide whether the emotional expressions of the two faces, or the character represented by the two faces, were the same or different. Significant left visual field (LVF) superiorities for both character and emotional expression recognition were found. Subsequent analyses demonstrated the independence of these effects. The LVF superiority for emotional judgments was related to the degree of affective expression, but that for character recognition was not. The results of this experiment are consistent with experimental and clinical literature which has indicated a right hemispheric superiority for face recognition and for processing emotional stimuli. The asymmetry for emotion recognition is interpreted as being an expression of the right hemisphere's synthetic and integrative characteristics, its holistic nature, and its use of imagic associations.

18.
The performance of agoraphobic and normal subjects is compared across three different types of task (Kamin blocking effect, incidental learning, and choice reaction time) all designed to tap processing of neutral stimuli. Agoraphobics differed from normals on the Kamin blocking task and on one of the two incidental learning measures employed. Choice reaction time performance was the same in both groups. The relevance of these findings for future studies of emotional processing in such subjects is discussed.

19.
20.
Hemispheric differences for orthographic and phonological processing
The role of hemispheric differences in the encoding of words was assessed by requiring subjects to match tachistoscopically presented word pairs on the basis of their rhyming or visual similarity. The interference between a word pair's orthography and phonology produced matching errors which were differentially affected by the visual field/hemisphere of projection and the sex of the subject. In general, right visual field/left hemisphere presentations yielded fewer errors when word pairs shared similar phonology under rhyme matching and similar orthography under visual matching. Left visual field/right hemisphere presentations yielded fewer errors when word pairs were phonologically dissimilar under rhyme matching and orthographically dissimilar under visual matching. Males made more errors and demonstrated substantially stronger hemispheric effects than females. These patterns suggested that visual field/hemispheric differences in orthographic and phonological encoding occurred during the initial stages of word processing and were more pronounced for male than for female subjects.
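A conventional way to summarize an asymmetry of this kind is a laterality index computed on accuracy. The short Python sketch below shows the computation with hypothetical proportions correct; both the index and the numbers are illustrative and are not taken from the study.

```python
def laterality_index(rvf_correct, lvf_correct):
    """Standard accuracy-based laterality index: positive values indicate a
    right-visual-field / left-hemisphere advantage, negative values the reverse."""
    return (rvf_correct - lvf_correct) / (rvf_correct + lvf_correct)

# Hypothetical proportions correct for one rhyme-matching condition.
print(round(laterality_index(rvf_correct=0.82, lvf_correct=0.74), 3))
```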
