Similar articles
 20 similar articles found (search time: 15 ms)
1.
The ability to recognize mental states from facial expressions is essential for effective social interaction. However, previous investigations of mental state recognition have used only static faces so the benefit of dynamic information for recognizing mental states remains to be determined. Experiment 1 found that dynamic faces produced higher levels of recognition accuracy than static faces, suggesting that the additional information contained within dynamic faces can facilitate mental state recognition. Experiment 2 explored the facial regions that are important for providing dynamic information in mental state displays. This involved using a new technique to freeze motion in a particular facial region (eyes, nose, mouth) so that this region was static while the remainder of the face was naturally moving. Findings showed that dynamic information in the eyes and the mouth was important and the region of influence depended on the mental state. Processes involved in mental state recognition are discussed.

2.
Two experiments (Experiment 1: N = 149; Experiment 2: N = 141) investigated how two mental states that underlie how perceivers reason about intentional action (awareness of action and desire for an outcome) influence blame and punishment for unintended (i.e., negligent) harms, and the role of anger in this process. Specifically, this research explores how the presence of awareness (of risk in acting, or simply of acting) and/or desire in an acting agent's mental states influences perceptions of negligence, judgements that the acting agent owes restitution to a victim, and the desire to punish the agent, mediated by anger. In both experiments, awareness and desire led to increased anger at the agent and increased perception of negligence. Anger mediated the effect of awareness and desire on negligence rather than negligence mediating the effect of mental states on anger. Anger also mediated punishment, and negligence mediated the effects of anger on restitution. We discuss how perceivers consider mental states such as awareness, desire, and knowledge when reasoning about blame and punishment for unintended harms, and the role of anger in this process.

3.
Visual Cognition, 2013, 21(3), 311–331
Previous work suggests that a range of mental states can be read from facial expressions, beyond the “basic emotions”. Experiment 1 tested this in more detail, by using a standardized method, and by testing the role of face parts (eyes vs. mouth vs. the whole face). Adult subjects were shown photographs of an actress posing 10 basic emotions (happy, sad, angry, afraid, etc.) and 10 complex mental states (scheme, admire, interest, thoughtfulness, etc.). For each mental state, each subject was shown the whole face, the eyes alone, or the mouth alone, and was given a forced choice of two mental state terms. Results indicated that: (1) Subjects show remarkable agreement in ascribing a wide range of mental states to facial expressions, (2) for the basic emotions, the whole face is more informative than either the eyes or the mouth, (3) for the complex mental states, seeing the eyes alone produced significantly better performance than seeing the mouth alone, and was as informative as the whole face. In Experiment 2, the eye-region effect was re-tested, this time using an actor's face, in order to test if this effect generalized across faces of different sex. Results were broadly similar to those found in Experiment 1. In Experiment 3, adults with autism or Asperger Syndrome were tested using the same procedure as Experiment 1. Results showed a significant impairment relative to normal adults on the complex mental states, and this was most marked on the eyes-alone condition. The results from all three experiments are discussed in relation to the role of perception in the use of our everyday “theory of mind”, and the role of eye-contact in this.

4.
We examined whether facial expressions of performers influence the emotional connotations of sung materials, and whether attention is implicated in audio-visual integration of affective cues. In Experiment 1, participants judged the emotional valence of audio-visual presentations of sung intervals. Performances were edited such that auditory and visual information conveyed congruent or incongruent affective connotations. In the single-task condition, participants judged the emotional connotation of sung intervals. In the dual-task condition, participants judged the emotional connotation of intervals while performing a secondary task. Judgements were influenced by melodic cues and facial expressions and the effects were undiminished by the secondary task. Experiment 2 involved identical conditions but participants were instructed to base judgements on auditory information alone. Again, facial expressions influenced judgements and the effect was undiminished by the secondary task. The results suggest that visual aspects of music performance are automatically and preattentively registered and integrated with auditory cues.

5.
Can face actions that carry significance within language be perceived categorically? We used continua produced by computational morphing of face-action images to explore this question in a controlled fashion. In Experiment 1 we showed that question-type, a syntactic distinction in British Sign Language (BSL), can be perceived categorically, but only when it is also identified as a question marker. A few hearing non-signers were sensitive to this distinction; among those who used sign, late sign learners were no less sensitive than early sign users. A very similar facial-display continuum between 'surprise' and 'puzzlement' was perceived categorically by deaf and hearing participants, irrespective of their sign experience (Experiment 2). The categorical processing of facial displays can be demonstrated for sign, but may be grounded in universally perceived distinctions between communicative face actions. Moreover, the categorical perception of facial actions is not confined to the six universal facial expressions.

6.
We investigated in two picture–word-interference experiments whether there is evidence for composition during compound production (Experiment 1). In Experiment 2, we tried to determine the level at which composition takes place. In Experiment 1, shared morphemes between distractor and target (HANDTASCHE, handbag) sped up naming regardless of category membership: semantically opaque (Plaudertasche, chatterbox) and semantically transparent distractors (Reisetasche, travelling bag) facilitated picture naming to a comparable degree. In Experiment 2, targets (BROTMESSER, bread knife) were combined with a simple word distractor (Kuchen, cake) categorically related to a target compound constituent but not related to the compound as a whole. This target was further paired with a categorically related compound distractor (Kochlöffel, wooden spoon). The simple word distractor was also paired with a categorically related compound constituent (BROT, bread). Whenever distractor (e.g., Kochlöffel, Kuchen) and target (BROTMESSER, BROT, respectively) were categorically related, semantic inhibition was observed. Distractors (e.g., Kuchen) related to only one compound constituent did not affect compound production. Taken together, our results indicate that during compound production a single lemma node is activated and that morphological composition takes place at the form level of representation. Current lexical selection mechanisms in language production models are not supported by these data.

7.
A familiar hypothesis about the recognition of distractor items as "new" is that it depends heavily on a metacognitive strategy in which the memorability or salience of the distractor is evaluated: if the item was deemed salient or memorable and yet no memory trace for it can be found, then it must not have been studied (e.g. Strack & Bless, 1994). In four experiments, no evidence was found to support this metamemory hypothesis. Experiments 1a, 1b, and 2 demonstrated that the judged salience of the stimuli did not predict participants' recognition judgements for distractors. In Experiments 3a and 3b, instructional manipulations designed to affect the ostensible metacognitive process failed to affect the recognition judgements. Finally, Experiment 4 indicated that confidence judgements do not support the predictions of the metamemory hypothesis.

8.
Build-up of proactive interference (PI) with visual-picture and auditory-verbal input modalities and the subsequent release from PI following a change in modality was investigated in three experiments with boys and girls, as follows: Experiment I (n = 64) at two mean age levels, 7–6 and 10–5; Experiment II (n = 64) at mean age 7–6; and Experiment III (n = 48) at age 11–4. PI build-up occurred in both modalities for all ages tested. Release from PI occurred following a change from auditory to visual input but not following a visual to auditory shift. In the final experiment, this asymmetrical improvement in performance was dependent upon an interaction between the modality of the input and distractor task on the final or release trial; changing to visual input produced a release effect regardless of the distractor task modality, while auditory input was associated with improvement in recall if a visual distractor task was employed whether or not a shift in input modality had occurred. This improvement was hypothesized to represent a decrease in retroactive interference rather than a release from proactive interference.

9.
Perceiving emotions correctly is foundational to the development of interpersonal skills. Five-month-old infants’ abilities to recognize, discriminate and categorize facial expressions of smiling were tested in three coordinated experiments. Infants were habituated to four degrees of smiling modeled by the same or different people; following habituation, infants were presented with a new degree of smile worn by the same and by a new person (Experiment 1), a new degree of smile and a fearful expression worn by the same person (Experiment 2) or a new degree of smile and a fearful expression worn by new people (Experiment 3). Infants showed significant novelty preferences for the new person smiling and for the fearful expressions over the new degree of smiling. These findings indicate that infants at 5 months can categorize the facial expression of smiling in static faces, and yet recognize the same person despite changes in facial expression; this is the youngest age at which these abilities have been demonstrated. The findings are discussed in light of the significance of emotion expression face processing in social interaction and infants’ categorization of faces.

10.
In two experiments we investigated recognition and classification judgements using an artificial grammar learning paradigm. In Experiment 1, when only new test items had to be judged, analysis of z-transformed receiver operating characteristics (z-ROCs) revealed no differences between classification and recognition. In Experiment 2, where we included old test items, z-ROCs in the two tasks differed, suggesting that judgements relied on different types of information. The results are interpreted in terms of heuristics that people use when making classification and recognition judgements.

11.
Two experiments investigated the role that different face regions play in a variety of social judgements that are commonly made from facial appearance (sex, age, distinctiveness, attractiveness, approachability, trustworthiness, and intelligence). These judgements lie along a continuum from those with a clear physical basis and high consequent accuracy (sex, age) to judgements that can achieve a degree of consensus between observers despite having little known validity (intelligence, trustworthiness). Results from Experiment 1 indicated that the face's internal features (eyes, nose, and mouth) provide information that is more useful for social inferences than the external features (hair, face shape, ears, and chin), especially when judging traits such as approachability and trustworthiness. Experiment 2 investigated how judgement agreement was affected when the upper head, eye, nose, or mouth regions were presented in isolation or when these regions were obscured. A different pattern of results emerged for different characteristics, indicating that different types of facial information are used in the various judgements. Moreover, the informativeness of a particular region/feature depends on whether it is presented alone or in the context of the whole face. These findings provide evidence for the importance of holistic processing in making social attributions from facial appearance.

12.
In two experiments we systematically explored whether people consider the format of text materials when judging their text learning, and whether doing so might inappropriately bias their judgements. Participants studied either text with diagrams (multimedia) or text alone and made both per-paragraph judgements and global judgements of their text learning. In Experiment 1 they judged their learning to be better for text with diagrams than for text alone. In that study, however, test performance was greater for multimedia, so the judgements may reflect either a belief in the power of multimedia or on-line processing. Experiment 2 replicated this finding and also included a third group that read texts with pictures that did not improve text performance. Judgements made by this group were just as high as those made by participants who received the effective multimedia format. These results confirm the hypothesis that people's metacomprehension judgements can be influenced by their beliefs about text format. Over-reliance on this multimedia heuristic, however, might reduce judgement accuracy in situations where it is invalid.

13.
The congruency effect of a task-irrelevant distractor has been found to be modulated by task-relevant set size and display set size. The present study used a psychological refractory period (PRP) paradigm to examine the cognitive loci of the display set size effect (dilution effect) and the task-relevant set size effect (perceptual load effect) on distractor interference. A tone discrimination task (Task 1), in which a response was made to the pitch of the target tone, was followed by a letter discrimination task (Task 2) in which different types of visual target display were used. In Experiment 1, in which display set size was manipulated to examine the nature of the display set size effect on distractor interference in Task 2, the modulation of the congruency effect by display set size was observed at both short and long stimulus-onset asynchronies (SOAs), indicating that the display set size effect occurred after the target was selected for processing in the focused attention stage. In Experiment 2, in which task-relevant set size was manipulated to examine the nature of the task-relevant set size effect on distractor interference in Task 2, the effects of task-relevant set size increased with SOA, suggesting that the target selection efficiency in the preattentive stage was impaired with increasing task-relevant set size. These results suggest that display set size and task-relevant set size modulate distractor processing in different ways.

14.
Identification of the second of two targets (T1, T2) inserted in a stream of distractors is impaired when presented 200–500 ms after the first (attentional blink, AB). An AB-like effect has been reported by Nieuwenstein, Potter, and Theeuwes (2009, Experiment 2; Journal of Experimental Psychology: Human Perception and Performance, 35, 159–169), with a distractor stream that contained only one target and a gap just before the target. Nieuwenstein et al. hypothesized that the gap enhanced the salience of the last distractor, causing it to be processed much like T1 in conventional AB studies. This hypothesis can also account for Lag-1 sparing (enhanced target performance when presented directly after the last distractor, without an intervening gap). We propose an alternative account of the Lag-1 sparing in the single-target paradigm based on observer strategy, and test it by presenting the single-target and dual-target conditions to separate groups (Experiment 2) instead of mixed across trials (Experiment 1 and Nieuwenstein et al.'s study). The single-target condition exhibited Lag-1 sparing when it was mixed with the dual-target condition, but a Lag-1 deficit when it was presented in isolation. This outcome is consistent with an observer-strategy account but not with an enhanced-salience account of the Lag-1 sparing effect in the single-target condition.

15.
Do sexual words have high attentional priority? How does the ability to ignore sexual distractors evolve with age? To answer these questions, two experiments using Rapid Serial Visual Presentation (RSVP) were conducted. Experiment 1 showed that both younger and older participants were better at identifying a target (the name of a colour) when it was preceded 336 ms earlier by a sexual word rather than by a musical word. Strikingly, the sexual‐word advantage was more pronounced for older adults than for younger adults. Experiment 2 showed that introducing a variable delay between the distractor and the target eliminated the sexual‐word advantage. This finding suggests that the sexual‐word advantage found in Experiment 1 was due to learning to utilize the sexual word as a temporal cue with a fixed duration between the distractor and the target. Contrary to previous research (Arnell et al., 2007, Emotion, 7, 465), neither experiment showed that sexual words produce an attentional blink.

16.
It has been suggested that action possibility judgements are formed through a covert simulation of the to-be-executed action. We sought to determine whether the motor system (via a common coding mechanism) influences this simulation, by investigating whether action possibility judgements are influenced by experience with the movement task (Experiments 1 and 2) and current body states (Experiment 3). The judgement task in each experiment involved judging whether it was possible for a person's hand to accurately move between two targets at presented speeds. In Experiment 1, participants completed the action judgements before and after executing the movement they were required to judge. Results were that judged movement times after execution were closer to the actual execution time than those prior to execution. The results of Experiment 2 suggest that the effects of execution on judgements were not due to motor activation or perceptual task experience—alternative explanations of the execution-mediated judgement effects. Experiment 3 examined how judged movement times were influenced by participants wearing weights. Results revealed that wearing weights increased judged movement times. These results suggest that the simulation underlying the judgement process is connected to the motor system, and that simulations are dynamically generated, taking into account recent experience and current body state.

18.
We present three experiments in which subjects were asked to make speeded sex judgements (Experiment 1) or semantic judgements (Experiments 2 and 3) to face targets and nonface items, while ignoring a solitary flanking distractor face or a nonface stimulus. Distractors could be either congruent (same response category) or incongruent (different response category) with the target. Distractor congruency effects were consistently observed in all combinations of target-distractor stimulus pairs, except when a distractor face flanked a target face. The failure to find congruency effects in this condition was explored further in a fourth experiment, in which four task-irrelevant flankers were simultaneously presented. Once again, no face-face congruency effects were found, even though comparison distractors interfered with face and nonface targets alike. However, four simultaneously presented distractor faces did not interfere with nonface targets either. We suggest that these experiments demonstrate a capacity limit for visual processing in these conditions, such that no more than one face is processed at a time.

19.
This study examines suppression in object-based attention in three experiments using an object-based attention task similar to Egly, R., Driver, J., & Rafal, R. D. (1994. Shifting visual attention between objects and locations: Evidence from normal and parietal lesion subjects. Journal of Experimental Psychology: General, 123(2), 161–177. doi:10.1037/0096-3445.123.2.161) with the addition of a distractor. In Experiment 1 participants identified a target object at one of four ends of two rectangles. The target location was validly cued on 72% of trials; on the remaining 28% of trials the target appeared at an uncued location on the same or a different object. Sixty-eight percent of trials also included a distractor on one of the two objects. Experiment 1 failed to show suppression when a distractor was present, but did demonstrate the spread of attention across the attended object when no distractor was present. Experiment 2 added a mask to the paradigm to make the task more difficult and engage suppression. When suppression was engaged in the task, data showed suppression on the unattended (different) object, but not on the attended (same) object. Experiment 3 replicated findings from Experiments 1 and 2 using a within-participants design. Findings are discussed in relation to the role of suppression in visual selective attention.

20.
We report two experiments on the relationship between allocentric/egocentric frames of reference and categorical/coordinate spatial relations. Jager and Postma (2003) suggest two theoretical possibilities about their relationship: categorical judgements are better when combined with an allocentric reference frame and coordinate judgements with an egocentric reference frame (interaction hypothesis); allocentric/egocentric and categorical/coordinate form independent dimensions (independence hypothesis). Participants saw stimuli comprising two vertical bars (targets), one above and the other below a horizontal bar. They had to judge whether the targets appeared on the same side (categorical) or at the same distance (coordinate) with respect either to their body-midline (egocentric) or to the centre of the horizontal bar (allocentric). The results from Experiment 1 showed a facilitation in the allocentric and categorical conditions. In line with the independence hypothesis, no interaction effect emerged. To see whether the results were affected by the visual salience of the stimuli, in Experiment 2 the luminance of the horizontal bar was reduced. As a consequence, a significant interaction effect emerged indicating that categorical judgements were more accurate than coordinate ones, and especially so in the allocentric condition. Furthermore, egocentric judgements were as accurate as allocentric ones with a specific improvement when combined with coordinate spatial relations. The data from Experiment 2 showed that the visual salience of stimuli affected the relationship between allocentric/egocentric and categorical/coordinate dimensions. This suggests that the emergence of a selective interaction between the two dimensions may be modulated by the characteristics of the task.


Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号