Similar Documents
20 similar documents found.
1.
To test the hypothesis that the magnitude of sex differences in simple visual reaction time (RT) has narrowed over time, a meta-analysis was conducted on 72 effect sizes derived from 21 studies (n = 15,003) published over a 73-year period. The analysis provided strong evidence for the hypothesized change. In addition, it indicated that the sex difference in RT was on average smaller in non-U.S. samples than in U.S. samples. No relation was found between the magnitude of the sex difference in RT and either age or the presence vs. absence of a warning signal. Two factors, participation in fast-action sports and driving, are proposed as having been responsible for the decrease in the magnitude of sex differences in simple visual RT over time.
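Each of the 72 effect sizes pooled here is a standardized mean difference. As a minimal sketch of how such values are computed and aggregated, using hypothetical RT means and a simple sample-size weighting (the function names and numbers are illustrative, not the study's actual data or weighting scheme):

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def weighted_mean_effect(effects, ns):
    """Pool per-study effect sizes, weighting each study by sample size."""
    return sum(d * n for d, n in zip(effects, ns)) / sum(ns)

# Hypothetical example: female mean RT 310 ms vs. male mean RT 305 ms.
d = cohens_d(310.0, 305.0, 10.0, 10.0, 50, 50)   # 0.5
pooled = weighted_mean_effect([0.2, 0.4], [100, 300])  # 0.35
```

A trend test over publication year of each study would then address the narrowing-over-time hypothesis.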

2.
Inhibition of return (IOR) and emotional stimuli both guide attentional bias and improve search efficiency, but whether the two interact has so far remained unclear. This study used a cue-target paradigm with emotional stimuli presented in audiovisual channels to examine the interaction between emotional-stimulus processing and IOR. In Experiment 1, emotional stimuli were presented either as unimodal visual faces or as emotionally congruent audiovisual pairs. Experiment 2 presented emotionally incongruent audiovisual stimuli to test whether the effect of congruent audiovisual stimuli on IOR was driven by the congruent emotional information in the auditory channel, i.e., whether the auditory emotional stimuli were actually processed. The results showed that congruent audiovisual emotional stimuli weakened IOR, whereas incongruent stimuli did not interact with IOR, and IOR did not differ significantly between unimodal and bimodal conditions. These findings indicate that only emotionally congruent audiovisual stimulation influences IOR at the same processing stage, further supporting the perceptual-inhibition account of IOR.

3.
Recognising identity and emotion conveyed by the face is important for successful social interactions and has thus been the focus of considerable research. Debate has surrounded the extent to which the mechanisms underpinning face emotion and face identity recognition are distinct or share common processes. Here we use an individual differences approach to address this issue. In a well-powered (N = 605) and age-diverse sample we used structural equation modelling to assess the association between face emotion recognition and face identity recognition ability. We also sought to assess whether this association (if present) reflected visual short-term memory and/or general intelligence (g). We observed a strong positive correlation (r = .52) between face emotion recognition ability and face identity recognition ability. This association was reduced in magnitude but still moderate in size (r = .28) and highly significant when controlling for measures of g and visual short-term memory. These results indicate that face emotion and face identity recognition abilities in part share a common processing mechanism. We suggest that face processing ability involves multiple functional components and that modelling the sources of individual differences can offer an important perspective on the relationship between these components.

4.
To examine the audiovisual musical-emotion conflict effect, the dominant processing channel under conflict, and the influence of musical experience, this study used music performance videos and compared musicians and non-musicians on the speed, accuracy, and intensity of emotion ratings under emotionally congruent and incongruent audiovisual conditions. Results showed that (1) emotion ratings were more accurate and more intense in the congruent condition; (2) in the incongruent condition, participants based their emotion-type ratings mainly on the emotional cues in the auditory channel; and (3) non-musicians relied more on visual emotional cues than musicians did. The findings indicate that inconsistent emotional information across channels impedes musical-emotion processing, that audition is the dominant processing channel in musical-emotion conflict, and that musical experience reduces the interference of the conflict effect for musicians.

5.
To investigate the mechanism of audiovisual musical-emotion processing and how it is affected by emotion type and musical training, this study used videos of music performances expressing happiness and sadness and compared musicians and non-musicians on the speed, accuracy, and intensity of emotion ratings under auditory-only, visual-only, and audiovisual conditions. Results showed that (1) the audiovisual condition differed significantly from the visual-only condition but not from the auditory-only condition; and (2) non-musicians rated sadness more accurately than musicians but rated happiness less accurately. These findings suggest that the audiovisual integration advantage in musical-emotion processing holds only relative to the visual-only channel; non-musicians are more sensitive to changes in visual emotional information, whereas musicians rely more on musical experience. Adding congruent visual emotional information to music performances may therefore help listeners without musical training.

6.
Although moderate to severe traumatic brain injury (TBI) leads to facial affect recognition impairments in up to 39% of individuals, protective and risk factors for these deficits are unknown. The aim of the current study was to examine the effect of sex on emotion recognition abilities following TBI. We administered two separate emotion recognition tests (one static and one dynamic) to 53 individuals with moderate to severe TBI (females = 28) and 49 demographically matched comparisons (females = 22). We then investigated the presence of a sex-by-group interaction in emotion recognition accuracy. In the comparison group, there were no sex differences. In the TBI group, however, females significantly outperformed males in the dynamic (but not the static) task. Moreover, males (but not females) with TBI performed significantly worse than comparison participants in the dynamic task. Further analysis revealed that sex differences in emotion recognition abilities within the TBI group could not be explained by lesion location, TBI severity, or other neuropsychological variables. These findings suggest that sex may serve as a protective factor for social impairment following TBI and inform clinicians working with TBI as well as research on the neurophysiological correlates of sex differences in social functioning.

7.
This study examined the effect of sense modality (auditory/visual) on emotional dampening (reduced responsiveness to emotions with elevation in blood pressure). Fifty-six normotensive participants were assessed on tasks requiring labelling and matching of emotions in faces and voices. Based on a median split of systolic and diastolic blood pressures (SBP and DBP, respectively), participants were divided into low BP, high BP, and isolated BP groups. On emotion-labelling tasks, analysis revealed reduced emotion recognition in the high BP group relative to the low BP group. On emotion-matching tasks, reduced emotion recognition was noted in both the high and isolated BP groups, compared to the low BP group, for the task that required matching a visual target with one of four auditory distractors. Our findings show for the first time that even isolated elevations in either SBP or DBP may result in emotional dampening. Furthermore, the study highlights that the emotional dampening effect generalises to explicit processing (labelling) of emotional information in both faces and voices, and that these effects tentatively occur during more pragmatic and covert (matching) emotion recognition processes too. These findings require replication in clinical hypertensives.

8.
The experiment investigated how the addition of emotion information from the voice affects the identification of facial emotion. We presented whole face, upper face, and lower face displays and examined correct recognition rates and patterns of response confusions for auditory-visual (AV), auditory-only (AO), and visual-only (VO) expressive speech. Emotion recognition accuracy was superior for AV compared to unimodal presentation. The pattern of response confusions differed across the unimodal conditions and across display type. For AV presentation, a response confusion occurred only when that confusion was present in each modality separately; thus response confusions were reduced compared to unimodal presentations. Emotion space (calculated from the confusion data) differed across display types for the VO presentations but was more similar for the AV ones, indicating that the addition of auditory information acted to harmonize the various VO response patterns. These results are discussed with respect to how bimodal emotion recognition combines auditory and visual information.

9.
Sensitivity to facial and vocal emotion is fundamental to children's social competence. Previous research has focused on children's facial emotion recognition, and few studies have investigated non-linguistic vocal emotion processing in childhood. We compared facial and vocal emotion recognition and processing biases in 4- to 11-year-olds and adults. Eighty-eight 4- to 11-year-olds and 21 adults participated. Participants viewed/listened to faces and voices (angry, happy, and sad) at three intensity levels (50%, 75%, and 100%). Non-linguistic tones were used. For each modality, participants completed an emotion identification task. Accuracy and bias for each emotion and modality were compared across 4- to 5-, 6- to 9-, and 10- to 11-year-olds and adults. The results showed that children's emotion recognition improved with age; preschoolers were less accurate than other groups. Facial emotion recognition reached adult levels by 11 years, whereas vocal emotion recognition continued to develop in late childhood. Response bias decreased with age. For both modalities, sadness recognition was delayed across development relative to anger and happiness. The results demonstrate that developmental trajectories of emotion processing differ as a function of emotion type and stimulus modality. In addition, vocal emotion processing showed a more protracted developmental trajectory, compared to facial emotion processing. The results have important implications for programmes aiming to improve children's socio-emotional competence.

10.
Emotions can be recognized whether conveyed by facial expressions, linguistic cues (semantics), or prosody (voice tone). However, few studies have empirically documented the extent to which multi-modal emotion perception differs from uni-modal emotion perception. Here, we tested whether emotion recognition is more accurate for multi-modal stimuli by presenting stimuli with different combinations of facial, semantic, and prosodic cues. Participants judged the emotion conveyed by short utterances in six channel conditions. Results indicated that emotion recognition is significantly better in response to multi-modal versus uni-modal stimuli. When stimuli contained only one emotional channel, recognition tended to be higher in the visual modality (i.e., facial expressions, semantic information conveyed by text) than in the auditory modality (prosody), although this pattern was not uniform across emotion categories. The advantage for multi-modal recognition may reflect the automatic integration of congruent emotional information across channels which enhances the accessibility of emotion-related knowledge in memory.

11.
Results from studies on gender differences in emotion recognition vary, depending on the types of emotion and the sensory modalities used for stimulus presentation. This makes comparability between different studies problematic. This study investigated emotion recognition of healthy participants (N = 84; 40 males; ages 20 to 70 years), using dynamic stimuli, displayed by two genders in three different sensory modalities (auditory, visual, audio-visual) and five emotional categories. The participants were asked to categorise the stimuli on the basis of their nonverbal emotional content (happy, alluring, neutral, angry, and disgusted). Hit rates and category selection biases were analysed. Women were found to be more accurate in recognition of emotional prosody. This effect was partially mediated by hearing loss for the frequency of 8,000 Hz. Moreover, there was a gender-specific selection bias for alluring stimuli: Men, as compared to women, chose "alluring" more often when a stimulus was presented by a woman as compared to a man.

12.
白鹭, 毛伟宾, 王蕊, 张文海. 《心理学报》 (Acta Psychologica Sinica), 2017, (9): 1172-1183
Using disgusted and fearful facial expressions, two negative emotions with low perceptual similarity, and offering five emotion-label options to reduce the facilitating effect of verbal context on face recognition, this study examined in two experiments how natural scenes and body actions affect the recognition of facial expressions. The aim was to test how emotional congruence between facial expressions and natural scenes influences emotional face recognition and scene processing, and how adding body actions whose emotion conflicts with the scene might affect facial expression recognition. The results showed that (1) even with the increased number of emotion-label options, the emotion of the natural scene still significantly influenced facial expression recognition; (2) when the emotions of the face and the scene were incongruent, face recognition depended more on processing of the scene, so the scene was processed more deeply; and (3) body actions interfered to some extent with the scene's influence on facial expression recognition, but natural scenes still played an important role in recognizing emotional faces.

13.
The episodic buffer component of working memory was examined in children with attention deficit/hyperactivity disorder (ADHD) and typically developing peers (TD). Thirty-two children (ADHD = 16, TD = 16) completed three versions of a phonological working memory task that varied with regard to stimulus presentation modality (auditory, visual, or dual auditory and visual), as well as a visuospatial task. Children with ADHD experienced the largest magnitude working memory deficits when phonological stimuli were presented via a unimodal, auditory format. Their performance improved during visual and dual modality conditions but remained significantly below the performance of children in the TD group. In contrast, the TD group did not exhibit performance differences between the auditory- and visual-phonological conditions but recalled significantly more stimuli during the dual-phonological condition. Furthermore, relative to TD children, children with ADHD recalled disproportionately fewer phonological stimuli as set sizes increased, regardless of presentation modality. Finally, an examination of working memory components indicated that the largest magnitude between-group difference was associated with the central executive. Collectively, these findings suggest that ADHD-related working memory deficits reflect a combination of impaired central executive and phonological storage/rehearsal processes, as well as an impaired ability to benefit from bound multimodal information processed by the episodic buffer.

14.
Using the Deese/Roediger-McDermott (DRM) false memory method, Smith and Hunt (1998) first reported the modality effect on false memory, showing that false recall from DRM lists was lower following visual study than following auditory study; this led to numerous studies on the mechanism of the modality effect on false memory and produced many competing explanations. In the present experiment, the authors tested the modality effect in false recognition using a blocked presentation condition and a random presentation condition. The experiment found a modality effect different from previous results: false recognition was greater following visual study than following auditory study, especially in the blocked presentation condition rather than in the random presentation condition. The authors argue that this reversed modality effect may be due to different encoding and processing characteristics of Chinese characters versus English words. Compared with English words, the visual graphemes of critical lures in Chinese lists are likely to be activated and encoded in participants' minds, making it more difficult for participants later to discriminate these internally generated graphemes from items actually presented in the visual modality. Hence visual presentation could lead to more false recognition than auditory presentation in Chinese lists. The results demonstrate that semantic activation occurring during the encoding and retrieval phases plays an important role in the modality effect in false recognition, and the findings can be explained by the activation-monitoring account.

15.
The present study quantified the magnitude of sex differences in perceptual asymmetries as measured with dichotic listening. This was achieved by means of a meta-analysis of the literature dating back to the initial use of dichotic listening as a measure of laterality. The meta-analysis included 249 effect sizes pertaining to sex differences and 246 effect sizes for the main effect of laterality. The results showed small and homogeneous sex differences in laterality in favor of men (d = 0.054). The main effect of laterality was of medium magnitude (d = 0.609) but heterogeneous. Homogeneity for the main effect of laterality was achieved through partitioning as a function of task, demonstrating larger asymmetries for verbal (d = 0.65) than for non-verbal tasks (d = 0.45). The results are discussed with reference to top-down and bottom-up factors in dichotic listening. The possible influence of a publication bias is also discussed.
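The homogeneity testing described in this abstract is conventionally done with Cochran's Q statistic: when Q is large relative to a chi-square distribution with k − 1 degrees of freedom, the effect sizes are heterogeneous, and the set is partitioned by moderators (here, task type) until each cluster is homogeneous. A minimal sketch with made-up effect sizes and weights (the function name and inputs are illustrative):

```python
def cochran_q(effects, weights):
    """Cochran's Q: weighted sum of squared deviations of study effect
    sizes from the weighted pooled mean. A large Q relative to a
    chi-square with k - 1 df indicates heterogeneity."""
    pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    return sum(w * (d - pooled) ** 2 for w, d in zip(weights, effects))

# Two hypothetical studies with equal weight: pooled mean 0.6, Q = 0.02.
q = cochran_q([0.5, 0.7], [1.0, 1.0])
```

In practice each study's weight is the inverse of its effect-size variance, not an arbitrary constant as in this toy example.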

16.
It is thought that number magnitude is represented in an abstract and amodal way on a left-to-right oriented mental number line. Major evidence for this idea has been provided by the SNARC effect (Dehaene, Bossini, & Giraux, 1993): responses to relatively larger numbers are faster for the right hand, those to smaller numbers for the left hand, even when number magnitude is irrelevant. The SNARC effect has been used to index automatic access to a central semantic and amodal magnitude representation. However, this assumption of modality independence has never been tested and it remains uncertain if the SNARC effect exists in other modalities in a similar way as in the visual modality. We have examined this question by systematically varying modality/notation (auditory number word, visual Arabic numeral, visual number word, visual dice pattern) in a within-participant design. The SNARC effect was found consistently for all modality/notation conditions, including auditory presentation. The size of the SNARC effect in the auditory condition did not differ from the SNARC effect in any visual condition. We conclude that the SNARC effect is indeed a general index of a central semantic and amodal number magnitude representation.
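The SNARC effect described here is commonly quantified per participant as the slope of a regression of dRT (right-hand RT minus left-hand RT) on number magnitude; a reliably negative slope means the right hand gets relatively faster as magnitude grows, the signature of a left-to-right number line. A minimal ordinary-least-squares sketch with hypothetical dRT values, not data from the study:

```python
def snarc_slope(magnitudes, drt):
    """OLS slope of dRT (right-hand RT minus left-hand RT, in ms) on
    number magnitude. A negative slope is the classic SNARC signature."""
    n = len(magnitudes)
    mean_x = sum(magnitudes) / n
    mean_y = sum(drt) / n
    covariance = sum((x - mean_x) * (y - mean_y)
                     for x, y in zip(magnitudes, drt))
    variance = sum((x - mean_x) ** 2 for x in magnitudes)
    return covariance / variance

# Hypothetical participant: right hand slower for 1 and 2,
# faster for 8 and 9 -> negative slope (SNARC present).
slope = snarc_slope([1, 2, 8, 9], [30.0, 20.0, -20.0, -30.0])
```

Comparing these slopes across the four modality/notation conditions is one way the amodality claim in the abstract could be tested.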

17.
Although previous research on emotion recognition ability (ERA) has found consistent evidence for a female advantage, the explanation for this sex difference remains incompletely understood. This study compared males and females on four emotion recognition tasks, using a community sample of 379 adults drawn from two regions of the United States (stratified with respect to age, sex, and socioeconomic status). Participants also completed the Levels of Emotional Awareness Scale (LEAS), a measure of trait emotional awareness (EA) thought to primarily reflect individual differences in emotion concept learning. We observed that individual differences in LEAS scores mediated the relationship between sex and ERA. In addition, we observed that ERA distributions were noticeably non-normal, and that, similar to findings with other cognitive performance measures, males showed more variability in ERA than females. These results further characterize sex differences in ERA and suggest that these differences may be explained by differences in EA, a trait variable linked primarily to early learning.

18.
Earlier studies on adults have shown sex differences in face recognition. Women tend to recognise more faces of other women than men do, whereas there are no sex differences with regard to male faces. In order to test the generality of earlier findings and to examine potential reasons for the observed pattern of sex differences, two groups of Swedish 9-year-old children (n = 101 and n = 96) viewed faces of either Swedish or Bangladeshi children and adults for later recognition. Results showed that girls outperformed boys in recognition of female faces, irrespective of ethnicity and age of the faces. Boys and girls recognised Swedish male faces to an equal extent, whereas girls recognised more Bangladeshi male faces than boys did. These results indicate that three factors explain the magnitude of sex differences in face recognition: an overall female superior face recognition ability, the correspondence between the sex of viewer and the gender of the face, and prior knowledge of the ethnicity of the face.

19.
BACKGROUND: People with autism or Asperger Syndrome (AS) show altered patterns of brain activity during visual search and emotion recognition tasks. Autism and AS are genetic conditions, and parents may show the 'broader autism phenotype.' AIMS: (1) To test whether parents of children with AS show atypical brain activity during a visual search and an empathy task; (2) to test for sex differences during these tasks at the neural level; (3) to test whether parents of children with autism are hyper-masculinized, as might be predicted by the 'extreme male brain' theory. METHOD: We used fMRI during a visual search task (the Embedded Figures Test (EFT)) and an emotion recognition test (the 'Reading the Mind in the Eyes' (or Eyes) test). SAMPLE: Twelve parents of children with AS vs. 12 sex-matched controls. DESIGN: Factorial analysis was used to map main effects of sex, group (parents vs. controls), and the sex × group interaction on brain function. An ordinal ANOVA also tested for regions of brain activity where females > males > fathers = mothers, to test for parental hyper-masculinization. RESULTS ON EFT TASK: Female controls showed more activity in extrastriate cortex than male controls, and both mothers and fathers showed even less activity in this area than sex-matched controls. There were no differences in group activation between mothers and fathers of children with AS. The ordinal ANOVA identified two specific regions in visual cortex (right and left, respectively) showing the pattern females > males > fathers = mothers, both in BA 19. RESULTS ON EYES TASK: Male controls showed more activity in the left inferior frontal gyrus than female controls, and both mothers and fathers showed even more activity in this area compared to sex-matched controls. Female controls showed greater bilateral inferior frontal activation than males. This was not seen when comparing mothers to males, or mothers to fathers. The ordinal ANOVA identified two specific regions showing the pattern females > males > mothers = fathers: left medial temporal gyrus (BA 21) and left dorsolateral prefrontal cortex (BA 44). CONCLUSIONS: Parents of children with AS show atypical brain function during both visual search and emotion recognition, in the direction of hyper-masculinization of the brain. Because of the small sample size and the lack of age-matching between parents and controls, these results constitute a pilot study that needs replicating with larger samples.

20.
Gender differences in object location memory: A meta-analysis
The goal of the present study was to quantify the magnitude of gender differences in object location memory tasks. A total of 123 effect sizes (d) drawn from 36 studies were included in a meta-analysis using a hierarchical approach. Object identity memory (37 effect sizes) and object location memory (86 effect sizes) tasks were analyzed separately. The object identity memory tasks showed significant gender differences that were homogeneous and in favor of women. For the object location memory tasks, effect sizes had to be partitioned by age (younger than 13, between 13 and 18, older than 18), object type (common, uncommon, gender neutral, geometric, masculine, feminine), scoring method (accuracy, time, distance), and type of measure (recall, recognition) to achieve homogeneity. Significant gender differences in favor of females were obtained in all clusters above the age of 13, with the exception of feminine, uncommon, and gender-neutral objects. Masculine objects and measures of distance produced significant effects in favor of males. Implications of these results for future work and for theoretical interpretations are discussed.

