191.
This study examined the contribution of social anxiety to the evaluation of emotional facial stimuli, while controlling for the gender of participants and stimuli. Participants (n=63) completed two tasks: a single face evaluation task in which they had to evaluate angry versus neutral faces, and a facial crowd evaluation task in which they had to evaluate displays with a varying number of neutral and angry faces. In each task, participants had to evaluate the stimuli with respect to (a) the degree of disapproval expressed by the single face/crowd, and (b) the perceived difficulty of interacting with the face/crowd (emotional cost). Consistent with earlier studies, results showed that social anxiety modulated the evaluation of single faces for emotional cost, but not for disapproval ratings. In contrast, the evaluation of facial crowds was modulated by social anxiety on both ratings.
192.
The current study investigated the effects of presentation time and fixation to expression-specific diagnostic features on emotion discrimination performance, in a backward masking task. While no differences were found when stimuli were presented for 16.67 ms, differences between facial emotions emerged beyond the happy-superiority effect at presentation times as early as 50 ms. Happy expressions were best discriminated, followed by neutral and disgusted, then surprised, and finally fearful expressions presented for 50 and 100 ms. While performance was not improved by the use of expression-specific diagnostic facial features, performance increased with presentation time for all emotions. Results support the idea of an integration of facial features (holistic processing) varying as a function of emotion and presentation time.
193.
Past literature has indicated that face inversion either attenuates emotion detection advantages in visual search, implying that detection of emotional expressions requires holistic face processing, or has no effect, implying that expression detection is feature based. Across six experiments that utilised different task designs, ranging from simple (single poser, single set size) to complex (multiple posers, multiple set sizes), and stimuli drawn from different databases, significant emotion detection advantages were found for both upright and inverted faces. Consistent with past research, the nature of the expression detection advantage, anger superiority (Experiments 1, 2 and 6) or happiness superiority (Experiments 3, 4 and 5), differed across stimulus sets. However both patterns were evident for upright and inverted faces. These results indicate that face inversion does not interfere with visual search for emotional expressions, and suggest that expression detection in visual search may rely on feature-based mechanisms.
194.
Previous meta-analyses support a female advantage in decoding non-verbal emotion (Hall, 1978, 1984), yet the mechanisms underlying this advantage are not understood. The present study examined whether the female advantage is related to greater female attention to the eyes. Eye-tracking techniques were used to measure attention to the eyes in 19 males and 20 females during a facial expression recognition task. Women were faster and more accurate in their expression recognition compared with men, and women looked more at the eyes than men. Positive relationships were observed between dwell time and number of fixations to the eyes and both accuracy of facial expression recognition and speed of facial expression recognition. These results support the hypothesis that the female advantage in facial expression recognition is related to greater female attention to the eyes.
195.
We investigated the effects of smiling on perceptions of positive, neutral and negative verbal statements. Participants viewed computer-generated movies of female characters who made angry, disgusted, happy or neutral statements and then showed either one of two temporal forms of smile (slow vs. fast onset) or a neutral expression. Smiles significantly increased the perceived positivity of the message by making negative statements appear less negative and neutral statements appear more positive. However, these smiles led the character to be seen as less genuine than when she showed a neutral expression. Disgust + smile messages led to higher judged happiness than did anger + smile messages, suggesting that smiles were seen as reflecting humour when combined with disgust statements, but as masking negative affect when combined with anger statements. These findings provide insights into the ways that smiles moderate the impact of verbal statements.
196.
Event-related brain potentials (ERPs) were recorded to assess the processing time course of ambiguous facial expressions with a smiling mouth but neutral, fearful, or angry eyes, in comparison with genuinely happy faces (a smile and happy eyes) and non-happy faces (neutral, fearful, or angry mouth and eyes). Participants judged whether the faces looked truly happy or not. Electroencephalographic recordings were made from 64 scalp electrodes to generate ERPs. The neural activation patterns showed early P200 sensitivity (differences between negative and positive or neutral expressions) and EPN sensitivity (differences between positive and neutral expressions) to emotional valence. In contrast, sensitivity to ambiguity (differences between genuine and ambiguous expressions) emerged only in later LPP components. Discrimination of emotional vs. neutral affect occurs between 180 and 430 ms from stimulus onset, whereas the detection and resolution of ambiguity takes place between 470 and 720 ms. In addition, while blended expressions involving a smile with angry eyes can be identified as not happy in the P200 (175–240 ms) component, smiles with fearful or neutral eyes produce the same ERP pattern as genuinely happy faces, thus revealing poor discrimination.
197.
Looking is a fundamental exploratory behavior by which infants acquire knowledge about the world. In theories of infant habituation, however, looking as an exploratory behavior has been deemphasized relative to the reliable nature with which looking indexes active cognitive processing. We present a new theory that connects looking to the dynamics of memory formation and formally implement this theory in a Dynamic Neural Field model that learns autonomously as it actively looks and looks away from a stimulus. We situate this model in a habituation task and illustrate the mechanisms by which looking, encoding, working memory formation, and long‐term memory formation give rise to habituation across multiple stimulus and task contexts. We also illustrate how the act of looking and the temporal dynamics of learning affect each other. Finally, we test a new hypothesis about the sources of developmental differences in looking.
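The core idea of the abstract, that looking drives memory formation and the accumulating memory in turn suppresses looking, can be illustrated with a minimal simulation. The sketch below is not the authors' Dynamic Neural Field model; it is a deliberately simplified scalar caricature under assumed parameters (time constants, threshold, and stimulus strength are all hypothetical) showing how this feedback loop produces declining looking times across repeated presentations, i.e., habituation.

```python
def habituation_trials(n_trials=6, dt=0.05, tau_a=1.0, tau_m=20.0,
                       stim=1.0, look_threshold=0.3, max_t=30.0):
    """Simulate looking time across repeated presentations of one stimulus.

    `a` is a looking/attention activation driven by the stimulus;
    `m` is a slowly accumulating memory trace that inhibits `a`.
    Memory grows only while the infant is looking, and as `m` builds
    across trials, `a` falls below the looking threshold sooner,
    so looking time per trial declines.
    """
    m = 0.0                       # memory trace persists across trials
    looking_times = []
    for _ in range(n_trials):
        a = 1.0                   # each trial starts with a look
        t = 0.0
        while a > look_threshold and t < max_t:
            da = (-a + stim - m) / tau_a   # fast activation dynamics
            dm = a / tau_m                 # slow learning, gated by looking
            a += dt * da
            m += dt * dm
            t += dt
        looking_times.append(t)
    return looking_times

print(habituation_trials())
```

Because `m` only ever grows, each trial's activation trajectory sits below the previous one, so the simulated looking times are monotonically non-increasing, the basic habituation curve the abstract describes.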
198.
Recognition memory is better for moving images than for static images (the dynamic superiority effect), and performance is best when the mode of presentation at test matches that at study (the study–test congruence effect). We investigated the basis for these effects. In Experiment 1, dividing attention during encoding reduced overall performance but had little effect on the dynamic superiority or study–test congruence effects. In addition, these effects were not limited to scenes depicting faces. In Experiment 2, movement improved both old–new recognition and scene orientation judgements. In Experiment 3, movement improved the recognition of studied scenes but also increased the spurious recognition of novel scenes depicting the same people as studied scenes, suggesting that movement increases the identification of individual objects or actors without necessarily improving the retrieval of associated information. We discuss the theoretical implications of these results and highlight directions for future investigation.
199.
Rachael E. Jack, Visual Cognition, 2013, 21(9-10), 1248-1286
With over a century of theoretical developments and empirical investigation in broad fields (e.g., anthropology, psychology, evolutionary biology), the universality of facial expressions of emotion remains a central debate in psychology. How near or far, then, is this debate from being resolved? Here, I will address this question by highlighting and synthesizing the significant advances in the field that have elevated knowledge of facial expression recognition across cultures. Specifically, I will discuss the impact of early major theoretical and empirical contributions in parallel fields and their later integration in modern research. With illustrative examples, I will show that the debate on the universality of facial expressions has arrived at a new juncture and faces a new generation of exciting questions.
200.
Face adaptation has been used as a tool to probe our representations for facial identity. It has also been claimed to play a functional role in face processing, perhaps calibrating the visual system towards encountered faces. However, for this to be so, face aftereffects must be observable following adaptation to ecologically valid moving stimuli, not just after prolonged viewing of static images. We adapted our participants to videos, static image sequences or single images of the faces of lecturers who were personally familiar to them. All three stimulus types produced significant, and equivalent, face identity aftereffects, demonstrating that aftereffects are not confined to static images but can occur after exposure to more naturalistic stimuli. It is also further evidence against explanations of face adaptation effects solely in terms of low-level visual processing.