Similar Articles
20 similar articles found (search time: 15 ms)
1.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
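The continua described above were built by equal-step linear morphing between end-point photographs. As a rough illustration only (not the authors' actual morphing software, which would also warp facial geometry), the equal-step interpolation can be sketched as a pixel cross-fade; the toy images and function name below are hypothetical:

```python
def morph_continuum(img_a, img_b, n_steps=11):
    """Blend two equal-sized grayscale images (lists of pixel rows)
    in equal linear steps; step 0 equals img_a, the last equals img_b.
    (Real face morphing also warps geometry; this cross-fade shows
    only the equal-step interpolation used to build a continuum.)"""
    continuum = []
    for i in range(n_steps):
        w = i / (n_steps - 1)           # weights 0.0, 0.1, ..., 1.0
        frame = [[(1 - w) * a + w * b for a, b in zip(row_a, row_b)]
                 for row_a, row_b in zip(img_a, img_b)]
        continuum.append(frame)
    return continuum

# Toy 1x2 "images": all-black (0) morphing to mid-gray (100)
steps = morph_continuum([[0.0, 0.0]], [[100.0, 100.0]])
print(len(steps))        # 11
print(steps[5][0][0])    # 50.0 (the continuum midpoint)
```

With `n_steps=11` the weights land on 0.0, 0.1, ..., 1.0, matching the 11-image, equal-step continua used in the experiments.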

2.
3.
Hierarchical cluster analysis of data from the sorting of noun words was used to compare semantic structures in 63 profoundly deaf and 63 hearing adolescents. In the first study, performance differed only for a set of words referring to sounds, where deaf persons have no experience, and not for a set of common noun words and pictures. In the second study, differences between matched sets of high- and low-imagery words were comparable for 63 deaf and 63 hearing subjects. It is concluded that deaf subjects manifested abstract hierarchical relations and were not dependent on visual mediators or hindered by the absence of acoustic mediators. This investigation was supported in part by National Institutes of Health Research Grant NS-09590-03 from the National Institute of Neurological Diseases and Stroke and in part by a Faculty Research Grant from Bowling Green State University. Portions of Study 1 were previously reported at the Eighty-first Annual Convention of the American Psychological Association, Montreal, 1973.
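Sorting data of this kind are typically converted to a pairwise co-occurrence matrix (how often two words land in the same pile) and then clustered. A minimal sketch, assuming a card-sorting task; the words, piles, and function names below are illustrative, not the study's materials:

```python
from itertools import combinations

def cooccurrence_distance(sorts, items):
    """Distance between items from a sorting task: 1 minus the
    proportion of subjects who placed the pair in the same pile.
    `sorts` is one sorting per subject, each a list of piles (sets)."""
    dist = {}
    for a, b in combinations(items, 2):
        together = sum(any(a in pile and b in pile for pile in piles)
                       for piles in sorts)
        dist[frozenset((a, b))] = 1 - together / len(sorts)
    return dist

def single_linkage(items, dist):
    """Naive agglomerative clustering (single linkage): repeatedly
    merge the two closest clusters, recording each merge in order."""
    clusters = [frozenset([i]) for i in items]
    merges = []
    while len(clusters) > 1:
        c1, c2 = min(combinations(clusters, 2),
                     key=lambda p: min(dist[frozenset((x, y))]
                                       for x in p[0] for y in p[1]))
        clusters = [c for c in clusters if c not in (c1, c2)] + [c1 | c2]
        merges.append((set(c1), set(c2)))
    return merges

# Three hypothetical subjects sorting four nouns into piles
sorts = [
    [{"dog", "cat"}, {"bell", "siren"}],
    [{"dog", "cat", "bell"}, {"siren"}],
    [{"dog", "cat"}, {"bell", "siren"}],
]
items = ["dog", "cat", "bell", "siren"]
merges = single_linkage(items, cooccurrence_distance(sorts, items))
print(merges[0])  # first merge: dog and cat (always sorted together)
```

The resulting merge order is the dendrogram that hierarchical cluster analysis inspects for semantic structure.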

4.
5.
To examine the processing of sequentially presented letters of familiar and nonsense words, especially among Ss of vastly differing experience on sequential tasks, three groups of Ss were tested on letters of words spelled sequentially on an alphanumeric display and on letters of words fingerspelled. These were a deaf group (N=33) with little or no hearing who varied in their fingerspelling ability; a staff group (N=12) who taught fingerspelling and were highly proficient; and a hearing group (N=19). Of principal interest was the finding that the hearing Ss did better on nonsense letter recognition, while the deaf group did better on word recognition. Word length was important except to the staff Ss on fingerspelled words, which also suggests that concentration on fingerspelling proficiency forces attention to the whole word and not its component letters. Hearing Ss, who are the group faced with an unfamiliar task, seemed to attend to each letter and hence had more difficulty with recognition of the longer unit.

6.
In a previous study of the comprehension of linguistic prosody in brain-damaged subjects, S. R. Grant and W. O. Dingwall (1984. The role of the right hemisphere in processing linguistic prosody, presentation at the Academy of Aphasia, 1984) demonstrated that the right hemisphere (RH) of nonaphasic patients plays a prominent role in the processing of stress and intonation. The present study examines laterality for affective and linguistic prosody using the dichotic listening paradigm. Both types of prosody elicited a significant left ear advantage. This advantage was more pronounced for affective than for linguistic prosody. These findings strongly support previously documented evidence of RH involvement in the processing of affective prosody (R. G. Ley & M. P. Bryden, 1982. A dissociation of right and left hemispheric effects for recognizing emotional tone and verbal content, Brain and Cognition, 1, 3-9). They also provide support for the previously mentioned demonstration of RH involvement in the processing of linguistic intonation (S. Blumstein & W. E. Cooper, 1974. Hemispheric processing of intonation contours, Cortex, 10, 146-158; Grant & Dingwall, 1984).

7.
Deaf and hearing children were given two tasks: (a) sorting faces portraying nine emotions and (b) matching those faces with drawings of appropriate emotion-arousing situations. The deaf children performed as the hearing children did on the first task but did not match the faces to the situations as well as the hearing children. It appeared that the deaf children were unable to analyze and interpret emotion-arousing events adequately. Possible reasons for this finding are presented and discussed in detail. This research was supported, in part, by Social Rehabilitation Services Grant No. RD-2552. The authors are indebted to Dr. Lloyd Graunke, Delmas Young, and Warren Flower of the Tennessee School for the Deaf, Elizabeth Stallings of the Monroe Harding Children's Home, Rev. Lucius Hart and Rev. Hudlow of the Baptist Children's Home, Dr. Lloyd Funchess and Jerome Freeman of the Louisiana State School for the Deaf, W. W. Wallace, Milton Lillard, and Wilburn Kelley of the Williamson County Tennessee School System, and Charles Barham of Tennessee Preparatory School.

8.
A group of congenitally deaf adults and a group of hearing adults, both fluent in sign language, were tested to determine cerebral lateralization. In the most revealing task, subjects were given a series of trials in which they were first presented with a videotaped sign and then with a word exposed tachistoscopically to the right visual field or left visual field, and were required to judge whether the word corresponded to the sign or not. The results suggested that the comparison processes involved in the decision were performed more efficiently by the left hemisphere for hearing subjects and by the right hemisphere for deaf subjects. However, the deaf subjects performed as well as the hearing subjects in the left hemisphere, suggesting that the deaf are not impeded by their auditory-speech handicap from developing the left hemisphere for at least some types of linguistic processing.

9.
Unconscious processing of stimuli with emotional content can bias affective judgments. Is this subliminal affective priming merely a transient phenomenon manifested in fleeting perceptual changes, or are long-lasting effects also induced? To address this question, we investigated memory for surprise faces 24 h after they had been shown with 30-ms fearful, happy, or neutral faces. Surprise faces subliminally primed by happy faces were initially rated as more positive, and were later remembered better, than those primed by fearful or neutral faces. Participants likely to have processed primes supraliminally did not respond differentially as a function of expression. These results converge with findings showing memory advantages with happy expressions, though here the expressions were displayed on the face of a different person, perceived subliminally, and not present at test. We conclude that behavioral biases induced by masked emotional expressions are not ephemeral, but rather can last at least 24 h.

10.
The extent to which ability to access linguistic regularities of the orthography is dependent on spoken language was investigated in a two-part spelling test administered to both hearing and profoundly deaf college students. The spelling test examined ability to spell words varying in the degree to which their correct orthographic representation could be derived from the linguistic structure of English. Both groups of subjects were found to be sensitive to the underlying regularities of the orthography as indicated by greater accuracy on linguistically-derivable words than on irregular words. Comparison of accuracy on a production task and on a multiple-choice recognition task showed that the performance of both deaf and hearing subjects benefited from the recognition format, but especially so in the spelling of irregular words. Differences in the underlying spelling process for deaf and hearing spellers were revealed in an analysis of their misspellings: Deaf subjects produced fewer phonetically accurate misspellings than did the hearing subjects. Nonetheless, the deaf spellers tended to observe the formational constraints of English phonology and morphology in their misspellings. Together, these results suggest that deaf subjects are able to develop an appreciation for the structural properties of the orthography, but that their spelling may be guided by an accurate representation of the phonetic structure of words to a lesser degree than it is for hearing spellers.

11.
12.
Among the neurobiological models of depression in children and adolescents, the neuropsychological one is considered here. Experimental and clinical evidence has allowed us to identify a lateralization of emotional functions from the very beginning of development, and a right hemisphere dominance for emotions is by now well-known. Many studies have also correlated depression with a right hemisphere dysfunction in patients of different ages. The aim of our study was to analyze recognition of different facial emotions by a group of depressed children and adolescents. Patients affected by Major Depressive Disorder recognized less fear in six fundamental emotions than a group of healthy controls, and Dysthymic subjects recognized less anger. The patients' failure to recognize negative, arousing facial expressions could indicate a subtle right hemisphere dysfunction in depressed children and adolescents.

13.
The present work represents the first study to investigate the relationship between adult attachment avoidance and anxiety and automatic affective responses to basic facial emotions. Subliminal affective priming methods allowed for the assessment of unconscious affective reactions. An affective priming task using masked sad and happy faces, both of which are approach‐related facial expressions, was administered to 30 healthy volunteers. Participants also completed the Relationship Scales Questionnaire and measures of anxiety and depression. Attachment avoidance was negatively associated with affective priming due to sad (but not happy) facial expressions. This association occurred independently of attachment anxiety, depressivity, and trait anxiety. Attachment anxiety was not correlated with priming due to sad or happy facial expressions. The present results are consistent with the assumption that attachment avoidance moderates automatic affective reaction to sad faces. Our data indicate that avoidant attachment is related to a low automatic affective responsivity to sad facial expressions.

14.
The objectives of this study were to propose a method of presenting dynamic facial expressions to experimental subjects, in order to investigate human perception of an avatar's facial expressions at different levels of emotional intensity. The investigation concerned how perception varies according to the strength of facial expression, as well as according to an avatar's gender. To accomplish these goals, we generated a male and a female virtual avatar with five levels of intensity of happiness and anger using a morphing technique. We then recruited 16 normal healthy subjects and measured each subject's emotional reaction by scoring affective arousal and valence after showing them the avatar's face. Through this study, we investigated the perceptual characteristics evoked by male and female avatars' graduated facial expressions of happiness and anger, and found that a virtual avatar's facial expression can affect human emotion differently according to the avatar's gender and the intensity of its expression. However, virtual faces have some limitations because they are not real: subjects recognized the expressions well but were not influenced by them to the same extent. Although a virtual avatar has some limitations in conveying its emotion using facial expressions, this study is significant in that it shows that a new potential exists to use or manipulate emotional intensity by controlling a virtual avatar's facial expression linearly using a morphing technique. Therefore, it is predicted that this technique may be used for assessing emotional characteristics of humans, and may be of particular benefit for work with people with emotional disorders through a presentation of dynamic expression of various emotional intensities.

15.
Discrimination of facial expressions of emotion by depressed subjects
A frequent complaint of depressed people concerns their poor interpersonal relationships. Yet, although nonverbal cues are considered of primary importance in interpersonal communication, the major theories of depression focus little attention on nonverbal social perception. The present study investigated the ability of depressed, disturbed control, and normal American adults to make rapid discriminations of facial emotion. We predicted and found that depressed subjects were slower than normal subjects in facial emotion discrimination but were not slower in word category discrimination. These findings suggest that current theories of depression may need to address difficulties with nonverbal information processing. There were also no significant differences between depressed and disturbed control subjects, suggesting that the unique social-behavioral consequences of depression have yet to be identified.

16.
This experiment assessed the effect of different payoff matrices on the performance of 6 deaf and 6 hearing subjects in a visual brightness discrimination task. Subjects were required to make forced-choice responses under three different monetary payoff conditions, designed to induce a liberal, a conservative, and an equal-bias response criterion, respectively. The results showed that the deaf did not select the superior response strategies they had exhibited in a previous study (Bross, 1979) on the effect of changes in stimulus probability. Furthermore, the deaf earned significantly less money than the controls for all three conditions, indicating that the introduction of motivational demands affects their response strategies adversely.
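Payoff matrices of this kind are analyzed within signal detection theory, which separates sensitivity (d′) from the response criterion (c) that the payoffs are meant to shift. A sketch of the standard computations; the hit/false-alarm rates and payoff values below are hypothetical, not data from the study:

```python
from statistics import NormalDist

def dprime_and_criterion(hit_rate, fa_rate):
    """Signal-detection sensitivity (d') and response criterion (c)
    from hit and false-alarm rates; positive c = conservative bias."""
    z = NormalDist().inv_cdf          # inverse normal CDF
    d = z(hit_rate) - z(fa_rate)
    c = -0.5 * (z(hit_rate) + z(fa_rate))
    return d, c

def expected_payoff(hit_rate, fa_rate, p_signal, payoffs):
    """Expected earnings per trial under a payoff matrix with keys
    'hit', 'miss', 'fa' (false alarm), 'cr' (correct rejection)."""
    p_noise = 1 - p_signal
    return (p_signal * (hit_rate * payoffs["hit"] +
                        (1 - hit_rate) * payoffs["miss"]) +
            p_noise * (fa_rate * payoffs["fa"] +
                       (1 - fa_rate) * payoffs["cr"]))

# A hypothetical conservative observer under a hit-rewarding matrix
d, c = dprime_and_criterion(0.70, 0.10)
print(round(d, 2))  # 1.81
print(round(c, 2))  # 0.38
print(round(expected_payoff(0.70, 0.10, 0.5,
      {"hit": 5, "miss": -1, "fa": -5, "cr": 1}), 2))  # 1.8
```

Comparing each subject's obtained criterion with the criterion that maximizes `expected_payoff` is how "superior" versus inferior response strategies are quantified in such designs.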

17.
A visual hemifield experiment investigated hemispheric specialization among hearing children and adults and prelingually, profoundly deaf youngsters who were exposed intensively to Cued Speech (CS). Of interest was whether deaf CS users, who undergo a development of phonology and grammar of the spoken language similar to that of hearing youngsters, would display similar laterality patterns in the processing of written language. Semantic, rhyme, and visual judgement tasks were used. In the visual task no VF advantage was observed. A RVF (left hemisphere) advantage was obtained for both the deaf and the hearing subjects for the semantic task, supporting Neville's claim that the acquisition of competence in the grammar of language is critical in establishing the specialization of the left hemisphere for language. For the rhyme task, however, a RVF advantage was obtained for the hearing subjects, but not for the deaf ones, suggesting that different neural resources are recruited by deaf and hearing subjects. Hearing the sounds of language may be necessary to develop left lateralised processing of rhymes.

18.
Very few large-scale studies have focused on emotional facial expression recognition (FER) in 3-year-olds, an age of rapid social and language development. We studied FER in 808 healthy 3-year-olds using verbal and nonverbal computerized tasks for four basic emotions (happiness, sadness, anger, and fear). Three-year-olds showed differential performance on the verbal and nonverbal FER tasks, especially with respect to fear. That is to say, fear was one of the most accurately recognized facial expressions when matched nonverbally and the least accurately recognized facial expression when labeled verbally. Sex did not influence emotion-matching or emotion-labeling performance after adjusting for basic matching or labeling ability. Three-year-olds made systematic errors in emotion-labeling. Namely, happy expressions were often confused with fearful expressions, whereas negative expressions were often confused with other negative expressions. Together, these findings suggest that 3-year-olds' FER skills strongly depend on task specifications. Importantly, fear was the most sensitive facial expression in this regard. Finally, in line with previous studies, we found that recognized emotion categories are initially broad, including emotions of the same valence, as reflected in the nonrandom errors of 3-year-olds.
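Systematic labeling errors like these are usually read off a confusion matrix of shown versus chosen emotions. A minimal sketch of the tabulation; the trial data below are invented for illustration, not the study's responses:

```python
from collections import Counter

EMOTIONS = ["happy", "sad", "angry", "fearful"]

def confusion_matrix(trials):
    """Tabulate emotion-labeling responses, given (shown, labeled)
    pairs, into a nested dict of counts. Off-diagonal cells reveal
    systematic confusions (e.g. happy faces labeled as fearful)."""
    counts = Counter(trials)
    return {shown: {labeled: counts[(shown, labeled)]
                    for labeled in EMOTIONS}
            for shown in EMOTIONS}

# Hypothetical responses from one labeling task
trials = [("happy", "happy"), ("happy", "fearful"), ("sad", "sad"),
          ("sad", "angry"), ("fearful", "fearful"), ("angry", "sad")]
cm = confusion_matrix(trials)
print(cm["happy"])  # {'happy': 1, 'sad': 0, 'angry': 0, 'fearful': 1}
```

Nonrandom error structure (e.g. same-valence confusions among negative emotions) shows up as clustering of counts in particular off-diagonal cells.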

19.
Recognising emotions from faces that are partly covered is more difficult than from fully visible faces. The focus of the present study is on the role of an Islamic versus non-Islamic context, i.e. Islamic versus non-Islamic headdress, in perceiving emotions. We report an experiment that investigates whether briefly presented (40 ms) facial expressions of anger, fear, happiness and sadness are perceived differently when covered by a niqāb or turban, compared to a cap and shawl. In addition, we examined whether oxytocin, a neuropeptide regulating affection, bonding and cooperation between ingroup members and fostering outgroup vigilance and derogation, would differentially impact on emotion recognition from wearers of Islamic versus non-Islamic headdresses. The results first of all show that the recognition of happiness was more accurate when the face was covered by a Western compared to Islamic headdress. Second, participants more often incorrectly assigned sadness to a face covered by an Islamic headdress compared to a cap and shawl. Third, when correctly recognising sadness, they did so faster when the face was covered by an Islamic compared to Western headdress. Fourth, oxytocin did not modulate any of these effects. Implications for theorising about the role of group membership on emotion perception are discussed.

20.
In this study we used an affective priming task to address the issue of whether the processing of emotional facial expressions occurs automatically, independent of attention or attentional resources. Participants had to attend to the emotion expression of the prime face, or to a nonemotional feature of the prime face, the glasses. When participants attended to glasses (emotion unattended), they had to report whether the face wore glasses or not (the glasses easy condition) or whether the glasses were rounded or squared (the shape difficult condition). Affective priming, measured on valence decisions on target words, was mainly defined as interference from incongruent rather than facilitation from congruent trials. Significant priming effects were observed only in the emotion and glasses tasks but not in the shape task. When the key–response mapping increased in complexity, taxing working memory load, affective priming effects were reduced equally for the three types of tasks. Thus, attentional load and working memory load contributed additively to the observed reduction in affective priming. These results cast some doubts on the automaticity of processing emotional facial expressions.

