Similar Literature
20 similar documents found (search time: 78 ms)
1.
Interactions between the processing of emotion expression and form-based information from faces (facial identity) were investigated using the redundant-target paradigm, in which we specifically tested whether identity and emotional expression are integrated in a superadditive manner (Miller, Cognitive Psychology 14:247–279, 1982). In Experiments 1 and 2, participants performed emotion and face identity judgments on faces with sad or angry emotional expressions. Responses to redundant targets were faster than responses to either single target when a universal emotion was conveyed, and performance violated the predictions from a model assuming independent processing of emotion and face identity. Experiment 4 showed that these effects were not modulated by varying interstimulus and nontarget contingencies, and Experiment 5 demonstrated that the redundancy gains were eliminated when faces were inverted. Taken together, these results suggest that the identification of emotion and of facial identity interact during face processing.
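The independence prediction mentioned in this abstract is conventionally evaluated with Miller's (1982) race model inequality: under independent-channel processing, the redundant-target RT distribution can never exceed the sum of the two single-target distributions at any time point. The sketch below is not from the paper; it is a minimal illustration with simulated reaction times, where the condition means and the helper name `race_model_violation` are assumptions for demonstration.

```python
import numpy as np

def race_model_violation(rt_single_a, rt_single_b, rt_redundant,
                         quantiles=np.linspace(0.05, 0.95, 10)):
    """Check Miller's (1982) race model inequality:
    P(RT <= t | redundant) <= P(RT <= t | A) + P(RT <= t | B).
    Returns True if the redundant-target CDF exceeds the bound at any
    tested time point, i.e., evidence against independent processing."""
    # Evaluate the CDFs at time points spanning the redundant-target distribution
    ts = np.quantile(rt_redundant, quantiles)
    cdf = lambda rts, t: np.mean(rts <= t)
    return any(cdf(rt_redundant, t) > cdf(rt_single_a, t) + cdf(rt_single_b, t)
               for t in ts)

# Toy data: redundant-target RTs faster than a race of independent channels allows
rng = np.random.default_rng(0)
a = rng.normal(520, 50, 1000)    # single-target condition A (e.g., emotion)
b = rng.normal(530, 50, 1000)    # single-target condition B (e.g., identity)
red = rng.normal(400, 40, 1000)  # redundant-target condition (coactivation)
print(race_model_violation(a, b, red))
```

With these toy parameters the fast tail of the redundant condition exceeds the summed single-target CDFs, so the function reports a violation; replacing `red` with a distribution no faster than either single condition would not.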

2.
Previous studies demonstrated that the sequential verification of different sensory modality properties for concepts (e.g., BLENDER-loud; BANANA-yellow) brings about a processing cost, known as the modality-switch effect. We report an experiment designed to assess the influence of the mode of presentation (i.e., visual, aural) of stimuli on the modality-switch effect in a property verification and lexical decision priming paradigm. Participants were required to perform a property verification or a lexical decision task on a target sentence (e.g., “a BEE buzzes”, “a DIAMOND glistens”) presented either visually or aurally after having been presented with a prime sentence (e.g., “the LIGHT is flickering”, “the SOUND is echoing”) that could share both, one, or none of the target’s mode of presentation and content modality. Results show that the mode of presentation of stimuli affects the conceptual modality-switch effect. Furthermore, the depth of processing required by the task modulates the complex interplay of perceptual and semantic information. We conclude that the MSE is a task-related, multilevel effect which can occur on two different levels of information processing (i.e., perceptual and semantic).

3.
Many experiments have found that emotional experience affects self-focused attention. Several approaches to cognition and emotion predict that conscious emotional experience may be unnecessary for this effect. To test this hypothesis, two experiments primed emotion concepts without affecting emotional experience. In Experiment 1, subliminal exposure to sad faces (relative to happy faces and neutral faces) increased self-focused attention but not subjectively experienced affect. In Experiment 2, a scrambled-sentences task that primed happy and sad emotion concepts increased self-focused attention relative to a neutral task. Thus, simply activating knowledge about emotions was sufficient to increase self-focused attention. The discussion considers implications for research on how emotional states affect self-awareness.

4.
To assess the impact of context information on emotion perception, participants saw a picture of a male or female person with either a neutral, happy or sad facial expression and received information about the context in which the picture was taken. Their task was to rate the emotion actually expressed in the photo (i.e., focal emotions) as well as emotions not actually expressed (i.e., non-focal emotions) and inferences extracted from them. We predicted and found that context information affected both the perception of emotions and the inferences that the observers drew from them. Perceivers used context information in order to make sense of what was perceived to the extent that in the case of neutral expressions and for non-focal emotions, they “see” things that do not actually exist.

5.
Depression has been associated with task-relevant increased attention toward negative information, reduced attention toward positive information, or reduced inhibition of task-irrelevant negative information. This study employed behavioural and psychophysiological measures (event-related potentials; ERP) to examine whether groups with risk factors for depression (past depression, current dysphoria) would show attentional biases or inhibitory deficits related to viewing facial expressions. In oddball task blocks, young adult participants responded to an infrequently presented target emotion (e.g., sad) and inhibited responses to an infrequently presented distracter emotion (e.g., happy) in the context of frequently presented neutral stimuli. Previous depression was uniquely associated with greater P3 ERP amplitude following sad targets, reflecting a selective attention bias. Also, dysphoric individuals less effectively inhibited responses to sad distracters than non-dysphoric individuals according to behavioural data, but not psychophysiological data. Results suggest that depression risk may be most reliably characterised by increased attention toward others' depressive facial emotion.

6.
The current research investigated the influence of body posture on adults' and children's perception of facial displays of emotion. In each of two experiments, participants categorized facial expressions that were presented on a body posture that was congruent (e.g., a sad face on a body posing sadness) or incongruent (e.g., a sad face on a body posing fear). Adults and 8-year-olds made more errors and had longer reaction times on incongruent trials than on congruent trials when judging sad versus fearful facial expressions, an effect that was larger in 8-year-olds. The congruency effect was reduced when faces and bodies were misaligned, providing some evidence for holistic processing. Neither adults nor 8-year-olds were affected by congruency when judging sad versus happy expressions. Evidence that congruency effects vary with age and with similarity of emotional expressions is consistent with dimensional theories and "emotional seed" models of emotion perception.

7.
Gable and Harmon-Jones (Psychological Science, 21(2), 211-215, 2010) reported that sadness broadens attention in a global-local letter task. This finding provided the key test for their motivational intensity account, which states that the level of spatial processing is not determined by emotional valence, but by motivational intensity. However, their finding is at odds with several other studies, showing no effect, or even a narrowing effect of sadness on attention. This paper reports two attempts to replicate the broadening effect of sadness on attention. Both experiments used a global-local letter task, but differed in terms of emotion induction: Experiment 1 used the same pictures as Gable and Harmon-Jones, taken from the IAPS dataset; Experiment 2 used a sad video underlaid with sad music. Results showed a sadness-specific global advantage in the error rates, but not in the reaction times. The same null results were also found in a South-Asian sample in both experiments, showing that effects on global/local processing were not influenced by a culturally related processing bias.

8.
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain–behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = –.51) and memory (r = –.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.

9.
Can the Simultaneous Experience of Opposing Emotions Really Occur?
Various investigators have proposed that people may feel simultaneous positive and negative affect. However, experimental evidence from tests of a recent theory about the intensity of emotion (J. W. Brehm, 1999) suggests that even when they are invited by the experimental design, positive and negative emotions do not occur at the same time. When people have been instigated to feel a particular emotion, such as happiness, and then are given a reason (e.g., sad news) for not feeling happy, they report continued happiness but no increase in sadness unless the reason for feeling sad is very great, in which case sadness replaces happiness. The present paper briefly reviews the underlying theory and evidence, and discusses implications.

10.
This paper elaborates a recent conceptualization of feature-based attention in terms of attention filters (Drew et al., Journal of Vision, 10(10):20, 1–16, 2010) into a general purpose centroid-estimation paradigm for studying feature-based attention. An attention filter is a brain process, initiated by a participant in the context of a task requiring feature-based attention, which operates broadly across space to modulate the relative effectiveness with which different features in the retinal input influence performance. This paper describes an empirical method for quantitatively measuring attention filters. The method uses a “statistical summary representation” (SSR) task in which the participant strives to mouse-click the centroid of a briefly flashed cloud composed of items of different types (e.g., dots of different luminances or sizes), weighting some types of items more strongly than others. In different attention conditions, the target weights for different item types in the centroid task are varied. The actual weights exerted on the participant’s responses by different item types in any given attention condition are derived by simple linear regression. Because, on each trial, the centroid paradigm obtains information about the relative effectiveness of all the features in the display, both target and distractor features, and because the participant’s response is a continuous variable in each of two dimensions (versus a simple binary choice as in most previous paradigms), it is remarkably powerful. The number of trials required to estimate an attention filter is an order of magnitude fewer than the number required to investigate much simpler concepts in typical psychophysical attention paradigms.
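The regression step described above can be made concrete with a toy simulation. The sketch below is not the authors' code: the item counts, noise level, and the assumed "true" filter weights are all hypothetical, and positions are reduced to one dimension for brevity. It models each mouse-click as a weighted average of the per-type item centroids plus response noise, then recovers the attention-filter weights by least squares.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate one observer in the centroid task: each trial flashes a cloud of
# items of K types (e.g., dim vs. bright dots); the click is modeled as a
# weighted centroid of the per-type item positions plus response noise.
K, items_per_type, n_trials = 3, 8, 200
true_weights = np.array([0.6, 0.3, 0.1])  # hypothetical attention filter

X = np.zeros((n_trials, K))  # per-type centroids (x-coordinate only)
y = np.zeros(n_trials)       # observer's clicked x-position
for i in range(n_trials):
    pos = rng.uniform(0, 1, (K, items_per_type))  # item positions by type
    X[i] = pos.mean(axis=1)                       # centroid of each item type
    y[i] = true_weights @ X[i] + rng.normal(0, 0.02)  # motor/decision noise

# Recover the attention filter by simple linear regression, as in the paradigm
est, *_ = np.linalg.lstsq(X, y, rcond=None)
est /= est.sum()  # normalize to relative weights
print(np.round(est, 2))
```

Because every trial constrains the weights of all item types at once, a few hundred trials suffice here to recover the filter to within a few hundredths, which illustrates the efficiency claim in the abstract.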

11.
This study demonstrated that semantic transparency as a linguistic property modulates the recognition memory for two-character Chinese words, with opaque words (i.e., words whose meanings cannot be derived from constituent characters—e.g., “光[/guang/, light]棍[/gun/, stick]”, bachelor) remembered better than transparent words (i.e., words whose meanings can be derived from constituent characters—e.g., “茶[/cha/, tea]杯[/bei/, cup]”, teacup). In Experiment 1, the participants made lexical decisions on transparent words, opaque words, and nonwords in the study and then engaged in an old/new recognition test. Experiment 2 employed a concreteness judgment as the encoding task to ensure equivalent semantic processing for opaque and transparent words. In Experiment 3, the neighborhood size of the two-character words was manipulated together with their semantic transparency. In all three experiments, opaque words were found to be better remembered than transparent words. We concluded that the conceptual incongruence between the meanings of a whole word and its constituent characters made opaque words more distinctive and, hence, better remembered than transparent words.

12.
Two experiments were directed at early phonological activation in the semantic categorization task. In Experiment 1, briefly exposed targets homophonic to category exemplars (rows for the category a flower), and their graphemic controls (robs), were judged for category membership with and without a backward pattern mask. False positives were greater for rows than robs to the same degree under both unmasked and masked conditions. In Experiment 2, false positives were examined in the semantic categorization task under backward dichoptic masking by pseudowords that were, in turn, masked monoptically by a pattern mask. Briefly exposed homophones (e.g., weak), masked by a phonologically similar pseudoword (“feek”), a graphemic control (“felk”), or an unrelated pseudoword (“furt”), were categorized as category exemplars (a unit of time). The difference in false positives was significant for weak-feek versus weak-furt, but not for weak-felk versus weak-furt. It was suggested that the persistence of the homophonic effects under the pattern masking of Experiment 1 and their amplification under the phonological masking of Experiment 2 occurred because phonological codes cohere rapidly and thereby provide immediately available constraints on semantic processing.

13.
The Morris water maze has been put forward in the philosophy of neuroscience as an example of an experimental arrangement that may be used to delineate the cognitive faculty of spatial memory (e.g., Craver and Darden, Theory and method in the neurosciences, University of Pittsburgh Press, Pittsburgh, 2001; Craver, Explaining the brain: Mechanisms and the mosaic unity of neuroscience, Oxford University Press, Oxford, 2007). However, in the experimental and review literature on the water maze throughout the history of its use, we encounter numerous responses to the question of “what” phenomenon it circumscribes, ranging from cognitive functions (e.g., “spatial learning”, “spatial navigation”), to representational changes (e.g., “cognitive map formation”), to terms that appear to refer exclusively to observable changes in behavior (e.g., “water maze performance”). To date, philosophical analyses of the water maze have not been directed at sorting out what phenomenon the device delineates, nor at the sources of the different answers to the question of what. I undertake both of these tasks in this paper. I begin with an analysis of Morris’s first published research study using the water maze and demonstrate that he emerged from it with an experimental learning paradigm that at best circumscribed a discrete set of observable changes in behavior. However, it delineated neither a discrete set of representational changes nor a discrete cognitive function. I cite this in combination with a reductionist-oriented research agenda in cellular and molecular neurobiology dating back to the 1980s as two sources of the lack of consistency across the history of the experimental and review literature as to what is under study in the water maze.

14.
Although there has been steady progress elucidating the influence of emotion on cognition, it remains unclear precisely when and why emotion impairs or facilitates cognition. The present study investigated the mechanisms involved in the influence of emotion on perception and working memory (WM), using modified 0-back and 2-back tasks, respectively. First, results showed that attentional focus modulated the impact of emotion on perception. Specifically, emotion facilitated perceptual task performance when it was relevant to the task, but it impaired performance when it was irrelevant to the task. The differential behavioural effect of emotion on perception as a function of attentional focus diminished under high WM load. Second, attentional focus did not directly modulate the impact of emotion on WM, but rather its influence depended on the dynamic relationship between internal representations. Specifically, WM performance was worse when the material already being held online and the new input were of matching emotions (e.g. both were negative), compared to when they were not. We propose that the competition between “bottom-up” and “top-down” processing for limited cognitive resources explains the nature of the influence of emotion on both perception and WM.

15.
Two experiments investigated the effect of visuospatial attention on redundancy gain in simple reaction time tasks. In each trial participants were given a central arrow cue indicating where a stimulus would most likely be presented (i.e., upper or lower half of the display in Experiment 1; left or right half of the display in Experiment 2). Then, a single stimulus or two redundant stimuli could be presented in either expected or unexpected locations. Replicating previous findings, responses were faster when stimuli appeared in expected rather than unexpected locations, and they were also faster when two redundant stimuli were presented than when only one was. Critically, redundancy gain was statistically equivalent for stimuli in expected and unexpected locations, suggesting that the effect of redundancy gain arises after the perceptual processes influenced by the allocation of visuospatial attention.

16.

Background

Attention to relevant emotional information in the environment is an important process related to vulnerability and resilience for mood and anxiety disorders. In the present study, the effects of left and right dorsolateral prefrontal cortex (i.e., DLPFC) stimulation on attentional mechanisms of emotional processing were tested and contrasted.

Methods

A sample of 54 healthy participants received 20 min of active and sham anodal transcranial direct current stimulation (i.e., tDCS) either of the left (n = 27) or of the right DLPFC (n = 27) on two separate days. The anode electrode was placed over the left or the right DLPFC, the cathode over the corresponding contra lateral supraorbital area. After each neurostimulation session, participants completed an eye-tracking task assessing direct processes of attentional engagement towards and attentional disengagement away from emotional faces (happy, disgusted, and sad expressions).

Results

Compared to sham, active tDCS over the left DLPFC led to faster gaze disengagement, whereas active tDCS over the right DLPFC led to slower gaze disengagement from emotional faces. Between-group comparisons showed that such inverse change patterns were significantly different and generalized for all types of emotion.

Conclusions

Our findings support a lateralized role of left and right DLPFC activity in enhancing/worsening the top-down regulation of emotional attention processing. These results support the rationale of new therapies for affective disorders aimed to increase the activation of the left over the right DLPFC in combination with attentional control training, and identify specific target attention mechanisms to be trained.

17.
This study analyzed the strategies that children ages 5 through 8 years used on two modified versions of Inhelder and Piaget's (The early growth of logic in the child. New York: Norton, 1964) class inclusion task. In two experiments, children were tested on Wilkinson's (Cognitive Psychology, 1976, 8, 64–85) “percept” inclusion task in which distinctive features marked both supraordinate and subclasses. It was hypothesized that children who fail standard Piagetian inclusion tasks succeed on the “percept” task by counting and comparing mutually exclusive features rather than using features as markers for classes and subclasses. The hypothesis was supported by children's performances on “percept” tasks in which solutions based on feature counting conflicted with solutions based on consideration of class inclusion relations. In two other experiments, children answered part-whole and part-part comparison questions in which both terms were described as classes and/or subclasses, or in which one of the two terms was described as a collection (e.g., a bunch of grapes). These experiments contrasted Markman and Seibert's (Cognitive Psychology, 1976, 8, 561–577) “organization” hypothesis that the greater psychological integrity of collections facilitates reasoning on part-whole comparison problems with the hypothesis that the facilitative effect results from the “large number” connotation of collective nouns. Results on collection problems in which parts were described as collections supported the “large number” hypothesis. Results were discussed in terms of their implications for Piaget's theory.

18.
Konkle, Brady, Alvarez and Oliva (Psychological Science, 21, 1551–1556, 2010) showed that participants have an exceptional long-term memory (LTM) for photographs of scenes. We examined to what extent participants’ exceptional LTM for scenes is determined by presentation time during encoding. In addition, at retrieval, we varied the nature of the lures in a forced-choice recognition task so that they resembled the target in gist (i.e., global or categorical) information, but were distinct in verbatim information (e.g., an “old” beach scene and a similar “new” beach scene; exemplar condition) or vice versa (e.g., a beach scene and a new scene from a novel category; novel condition). In Experiment 1, half of the list of scenes was presented for 1 s, whereas the other half was presented for 4 s. We found lower performance for shorter study presentation time in the exemplar test condition and similar performance for both study presentation times in the novel test condition. In Experiment 2, participants showed similar performance in an exemplar test for which the lure was of a different category but a category that was used at study. In Experiment 3, when presentation time was lowered to 500 ms, recognition accuracy was reduced in both novel and exemplar test conditions. A less detailed memorial representation of the studied scene containing more gist (i.e., meaning) than verbatim (i.e., surface or perceptual details) information is retrieved from LTM after a short compared to a long study presentation time. We conclude that our findings support fuzzy-trace theory.

19.
In 1997, David Foster Wallace published “The Depressed Person,” a short story about a privileged, deeply unhappy woman dedicated to exploring and recounting the texture and etiology of her chronic depression. This essay argues that “The Depressed Person” challenges the long-standing assumption that narrativizing the pain of depression is crucial to overcoming it, and the contemporary view that empathic responses from others promote recovery of the depressed. Taken together, these two critiques inform Wallace’s portrayal of chronic depression as an interactive phenomenon that is articulated, sustained, and regenerated through problematic contexts of interaction. Written at a time when public knowledge of and talk about depression was surging, “The Depressed Person” holds an important, if presently under-recognized place, in the expansive corpus of depression texts that emerged in the 1990s.

20.
Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In Experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In Experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.


Copyright © Beijing Qinyun Technology Development Co., Ltd. 京ICP备09084417号