24 query results (search time: 15 ms)
1.
Subjects were asked to detect visual, laterally presented reaction signals preceded by head-body cue stimuli in a spatial cueing task. A head rotated towards the reaction signal combined with a front view of a body resulted in shorter reaction times than a front view of both head and body. In contrast, a cue showing both the head and body rotated towards the reaction signal did not produce such facilitation of reaction times. The results suggest that the brain mechanisms involved in social attention orienting integrate ventrally processed visual information from head and body orientation. A cue signaling that the other person, in his or her own frame of reference, has an averted attention direction shifts the observer's own attention in the same direction.
2.
In 6 experiments, the authors investigated whether attention orienting by gaze direction is modulated by the emotional expression (neutral, happy, angry, or fearful) on the face. The results showed a clear spatial cuing effect by gaze direction but no effect by facial expression. In addition, it was shown that the cuing effect was stronger with schematic faces than with real faces, that gaze cuing could be achieved at very short stimulus onset asynchronies (14 ms), and that there was no evidence for a difference in the strength of cuing triggered by static gaze cues and by cues involving apparent motion of the pupils. In sum, the results suggest that in normal, healthy adults, eye direction processing for attention shifts is independent of facial expression analysis.
3.
The present aim was to investigate how emotional expressions presented on an unattended channel affect the recognition of attended emotional expressions. In Experiments 1 and 2, facial and vocal expressions were simultaneously presented as stimulus combinations. The emotions (happiness, anger, or emotional neutrality) expressed by the face and voice were either congruent or incongruent. Subjects were asked to attend either to the visual (Experiment 1) or auditory (Experiment 2) channel and recognise the emotional expression. The results showed that the ignored emotional expressions significantly affected the processing of attended signals, as measured by recognition accuracy and response speed. In general, attended signals were recognised more accurately and faster in congruent than in incongruent combinations. In Experiment 3, the possibility of perceptual-level integration was eliminated by presenting the response-relevant and response-irrelevant signals separated in time. In this situation, emotional information presented on the unattended channel ceased to affect the processing of emotional signals on the attended channel. The present results are interpreted as evidence for the view that facial and vocal emotional signals are integrated at the perceptual level of information processing and not at the later response-selection stages.
4.
Two theoretical frameworks that examine the nature of adaptability and mutual influence in interaction, interpersonal deception theory and interaction adaptation theory, were used to derive hypotheses concerning patterns of interaction that occur across time in truthful and deceptive conversations. Two studies were conducted in which senders were either truthful or deceptive in their interactions with a partner who increased or decreased involvement during the latter half of the conversation. Results revealed that deceivers felt more anxious and were more concerned about self‐presentation than truthtellers prior to the interaction and displayed less initial involvement than truthtellers. Patterns of interaction were also moderated by deception. Deceivers increased involvement over time but also reciprocated increases or decreases in receiver involvement. However, deceivers were less responsive than truthtellers to changes in receiver behavior. Finally, partner involvement served as feedback to senders regarding their own performance.
5.
Emotionally expressive faces have shown enhanced detectability over neutral faces, but little is known about the effect of eye gaze on detecting the presence of emotional faces. Emotional expressions and gaze direction are both cues to the intentions of another person, and gaze direction has been shown to affect recognition accuracy and perceived intensity of emotional faces. The current study showed that fearful faces were detected more frequently with an averted gaze than with a direct gaze in an attentional blink task, whereas angry and happy faces were detected more frequently with a direct gaze than with an averted gaze. The results are in line with the shared signal hypothesis and appraisal theory and suggest that selection for awareness was based on a rapid evaluation of the intentions of another person as conveyed by their facial expression and gaze direction.
6.
Three experiments examined the recognition speed advantage for happy faces. The results replicated earlier findings by showing that positive (happy) facial expressions were recognized faster than negative (disgusted or sad) facial expressions (Experiments 1 and 2). In addition, the results showed that this effect was evident even when low-level physical differences between positive and negative faces were controlled by using schematic faces (Experiment 2), and that the effect was not attributable to an artifact arising from facilitated recognition of a single feature in the happy faces (up-turned mouth line, Experiment 3). Together, these results suggest that the happy face advantage may reflect a higher-level asymmetry in the recognition and categorization of emotionally positive and negative signals.
7.
8.
In this study, we investigated gaze-cued attention orienting when the perceived eyes are not looking in the same direction. This condition occurs in strabismus (squint). Participants were asked to detect laterally presented reaction signals preceded by schematic faces in which the direction (left, straight, or right) of the left and right eye was independently manipulated. Consistent with earlier studies, the results showed a reliable cuing effect by two eyes with parallel gaze direction. Gaze-cued orienting was also shown in a situation when one eye was averted and the other eye was looking straight ahead. The gaze cuing was not significantly stronger in the former than in the latter situation. When both eyes were either nasally or temporally averted, no shifts of visual attention were observed. The results suggest that, if both eyes are visible, the direction of both eyes is computed and integrated for gaze-cued orienting.
9.
Two paradigms have shown that people automatically compute what or where another person is looking. In the visual perspective-taking paradigm, participants judge how many objects they see, whereas in the gaze cueing paradigm, participants identify a target. Unlike in the former task, in the latter the influence of what or where the other person is looking is observed only when the other person is presented alone, before the task-relevant objects. We show that this discrepancy across the two paradigms is not due to differences in visual settings (Experiment 1) or in the time available to extract the directional information (Experiment 2), but is caused by how attention is deployed in response to task instructions (Experiment 3). Thus, the mere presence of another person in the field of view is not sufficient to compute where or at what that person is looking, which qualifies the claimed automaticity of such computations.
10.
The present study investigated whether facial expressions modulate visual attention in 7-month-old infants. First, infants' looking duration to individually presented fearful, happy, and novel facial expressions was compared to looking duration to a control stimulus (scrambled face). The face with a novel expression was included to examine the hypothesis that the earlier findings of greater allocation of attention to fearful as compared to happy faces could be due to the novelty of fearful faces in infants' rearing environment. The infants looked longer at the fearful face than at the control stimulus, whereas no such difference was found between the other expressions and the control stimulus. Second, a gap/overlap paradigm was used to determine whether facial expressions affect the infants' ability to disengage their fixation from a centrally presented face and shift attention to a peripheral target. It was found that infants disengaged their fixation significantly less frequently from fearful faces than from control stimuli and happy faces. Novel facial expressions did not have a similar effect on attention disengagement. Thus, it seems that adult-like modulation of the disengagement of attention by threat-related stimuli can be observed early in life, and that the influence of emotionally salient (fearful) faces on visual attention is not simply attributable to the novelty of these expressions in infants' rearing environment.
Copyright © Beijing Qinyun Technology Development Co., Ltd.  京ICP备09084417号