171.
Mark D. Vida 《Journal of experimental child psychology》2009,104(3):326-345
This investigation used adaptation aftereffects to examine developmental changes in the perception of facial expressions. Previous studies have shown that adults’ perceptions of ambiguous facial expressions are biased following adaptation to intense expressions. These expression aftereffects are strong when the adapting and probe expressions share the same facial identity but are mitigated when they are posed by different identities. We extended these findings by comparing expression aftereffects and categorical boundaries in adults versus 5- to 9-year-olds (n = 20/group). Children displayed adult-like aftereffects and categorical boundaries for happy/sad by 7 years of age and for fear/anger by 9 years of age. These findings suggest that both children and adults perceive expressions according to malleable dimensions in which representations of facial expression are partially integrated with facial identity.
172.
This study examined the perception of emotional expressions, focusing on the face and the body. Photographs of four actors expressing happiness, sadness, anger, and fear were presented in congruent (e.g., happy face with happy body) and incongruent (e.g., happy face with fearful body) combinations. Participants selected an emotional label using a four-option categorisation task. Reaction times and accuracy for the categorisation judgement, and eye movements were the dependent variables. Two regions of interest were examined: face and body. Results showed better accuracy and faster reaction times for congruent images compared to incongruent images. Eye movements showed an interaction in which there were more fixations and longer dwell times to the face and fewer fixations and shorter dwell times to the body with incongruent images. Thus, conflicting information produced a marked effect on information processing in which participants focused to a greater extent on the face compared to the body.
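As an illustration only, the congruency comparison described above (mean reaction time and accuracy per condition) can be sketched in a few lines. The trial records, field layout, and values below are invented for the example, not the study's data:

```python
from statistics import mean

# Invented trial records: (condition, reaction_time_ms, correct).
trials = [
    ("congruent", 612, True),
    ("congruent", 655, True),
    ("incongruent", 790, False),
    ("incongruent", 742, True),
]

def summarise(condition):
    """Mean reaction time (ms) and proportion correct for one condition."""
    subset = [t for t in trials if t[0] == condition]
    rt = mean(t[1] for t in subset)
    acc = mean(1.0 if t[2] else 0.0 for t in subset)
    return rt, acc

congruent_rt, congruent_acc = summarise("congruent")
incongruent_rt, incongruent_acc = summarise("incongruent")
```

With these made-up numbers, congruent trials come out faster and more accurate, mirroring the direction of the pattern the abstract reports.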
173.
Recognizing facial expressions is crucial for adaptive social interaction. Prior empirical research on facial expression processing has primarily focused on isolated faces; however, facial expressions appear embedded in surrounding scenes in everyday life. In this study, we attempted to demonstrate how the online car-hailing scene affects the processing of facial expression. This study examined the processing of drivers' facial expressions in scenes by recording event-related potentials, in which neutral or happy faces embedded in online car-hailing orders were constructed (with type of vehicle, driver rating, driver surname, and level of reputation controlled). A total of 35 female volunteers participated in this experiment and were asked to judge which facial expressions that emerged in scenes of online car-hailing were more trustworthy. The results revealed an interaction between facial expression scenes, brain areas, and electrode sites in the late positive potential, which indicated that happy faces elicited larger amplitudes than did neutral ones in the parietal areas and that scenes with happy facial expressions had shorter latencies than did those with neutral ones. As expected, the late positive potential evoked by happy facial expressions in a scene was larger than that evoked by neutral ones, which reflected motivated attention and motivational response processes. This study highlights the importance of scenes as context in the study of facial expression processing.
174.
Catia Correia-Caeiro Abbey Lawrence Abdelhady Abdelrahman Kun Guo Daniel Mills 《Developmental science》2023,26(3):e13332
Children are often surrounded by other humans and companion animals (e.g., dogs, cats), and understanding facial expressions in all these social partners may be critical to successful social interactions. In an eye-tracking study, we examined how children (4–10 years old) view and label facial expressions in adult humans and dogs. We found that children looked more at dogs than humans, and more at negative than positive or neutral human expressions. Their viewing patterns (Proportion of Viewing Time, PVT) at individual facial regions were also modified by the viewed species and emotion, with the eyes not always being the most viewed region: this related to positive anticipation when viewing humans, whilst when viewing dogs, the mouth was viewed more than or as much as the eyes for all emotions. We further found that children's labelling (Emotion Categorisation Accuracy, ECA) was better for the perceived valence than for the emotion category, with positive human expressions easier than both positive and negative dog expressions. They performed poorly when asked to freely label facial expressions, but performed better for human than dog expressions. Finally, we found some effects of age, sex and other factors (e.g., experience with dogs) on both PVT and ECA. Our study shows that children have different gaze patterns and identification accuracy compared to adults when viewing the faces of adult humans and dogs. We suggest that for recognising human (own-face-type) expressions, familiarity obtained through casual social interactions may be sufficient; but for recognising dog (other-face-type) expressions, explicit training may be required to develop competence.
Highlights
- We conducted an eye-tracking experiment to investigate how children view and categorise facial expressions in adult humans and dogs
- Children's viewing patterns were significantly dependent upon the facial region, species, and emotion viewed
- Children's categorisation also varied with the species and emotion viewed, with better performance for valence than emotion categories
- Own-face-types (adult humans) are easier than other-face-types (dogs) for children, and casual familiarity with the latter (e.g., through family dogs) is not enough to achieve perceptual competence
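The PVT measure named in the abstract is, in essence, dwell time on one facial region divided by total dwell time on the face. A minimal sketch of that computation, with invented fixation records (not the study's data or format):

```python
# Invented fixation records: (facial_region, dwell_time_ms).
fixations = [
    ("eyes", 420),
    ("mouth", 510),
    ("eyes", 180),
    ("nose", 90),
]

def pvt(region):
    """Proportion of Viewing Time: region dwell time / total dwell time."""
    total = sum(d for _, d in fixations)
    return sum(d for r, d in fixations if r == region) / total
```

By construction, the PVT values across all regions sum to 1, which is what makes them comparable across species and emotion conditions.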
175.
Domagoj Švegar Nadalia Fiamengo Marija Grundler Igor Kardum 《International journal of psychology》2018,53(1):49-57
The goal of this research was to examine the effects of facial expressions on the speed of sex recognition. Prior research revealed that sex recognition of female angry faces was slower than that of male angry faces, and that female happy faces were recognized faster than male happy faces. We aimed to replicate and extend the previous research by using a different set of facial stimuli and a different methodological approach, and by examining the effects of some previously unexplored expressions (such as crying) on the speed of sex recognition. In the first experiment, we presented facial stimuli of men and women displaying anger, fear, happiness, sadness and crying, plus three control conditions expressing no emotion. Results showed that sex recognition of angry female faces was significantly slower than in any other condition, while sad, crying, happy, frightened and neutral expressions did not affect the speed of sex recognition. In the second experiment, we presented angry, neutral and crying expressions in blocks, and again only sex recognition of female angry expressions was slower compared with all other expressions. The results are discussed in the context of the perceptual features of male and female facial configurations, evolutionary theory and social learning.
176.
Michal Olszanowski Olga Katarzyna Kaminska Piotr Winkielman 《Cognition & emotion》2018,32(5):1032-1051
Facial features that resemble emotional expressions influence key social evaluations, including trust. Here, we present four experiments testing how the impact of such expressive features is qualified by their processing difficulty. We show that faces with mixed expressive features are relatively devalued, and faces with pure expressive features are relatively valued. This is especially true when participants first engage in a categorisation task that makes processing of mixed expressions difficult and pure expressions easy. Critically, we also demonstrate that the impact of categorisation fluency depends on the specific nature of the expressive features. When faces vary on valence (i.e. sad to happy), trust judgments increase with their positivity, but also depend on fluency. When faces vary on social motivation (i.e. angry to sad), trust judgments increase with their approachability, but remain impervious to disfluency. This suggests that people intelligently use fluency to make judgments on valence-relevant judgment dimensions – but not when faces can be judged using other relevant criteria, such as motivation. Overall, the findings highlight that key social impressions (like trust) are flexibly constructed from inputs related to stimulus features and processing experience.
177.
Close relationship partners often respond to happiness expressed through smiles with capitalization, i.e. they join in attempting to up-regulate and prolong the individual’s positive emotion, and they often respond to crying with interpersonal down-regulation of negative emotions, attempting to dampen the negative emotions. We investigated how people responded when happiness was expressed through tears, an expression termed dimorphous. We hypothesised that the physical expression of crying would prompt interpersonal down-regulation of emotion when the onlooker perceived that the expresser was experiencing negative or positive emotions. When participants were asked how they would behave when faced with smiles of joy, we expected capitalization responses, and when faced with tears of joy, we expected down-regulation responses. In six experimental studies using video and photographic stimuli, we found support for our hypotheses. Throughout our investigations we test and discuss boundaries of and possible mechanisms for such responsiveness.
178.
Calls to communicate uncertainty using mixed, verbal‐numerical formats (‘unlikely [0–33%]’) have stemmed from research comparing mixed with solely verbal communications. Research using the new ‘which outcome’ approach to investigate understanding of verbal probability expressions suggests, however, that mixed formats might carry disadvantages compared with purely numerical communications. When asked to indicate an outcome that is ‘unlikely’, participants have often been shown to indicate outcomes with a value exceeding the maximum value shown, equivalent to a 0% probability (an ‘extremity effect’). Recognising the potential consequences of communication recipients expecting an ‘unlikely’ event to never occur, we extend the ‘which outcome’ work across four experiments, using verbal, numerical and verbal‐numerical communication formats, as well as a previously unconsidered numerical‐verbal format. We examine how robust the effect is in the context of consequential outcomes and over non‐normal distributions. We also investigate whether participants are aware of the inconsistency in their responses across a traditional ‘how likely’ task and a ‘which outcome’ task. We replicate and extend previous findings, with a preference for extreme outcomes (including above-maximum values) observed in both verbal and verbal‐numerical formats. Our results suggest caution in blanket usage of recently recommended verbal‐numerical formats for the communication of uncertainty. Copyright © 2018 John Wiley & Sons, Ltd.
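The extremity effect described above reduces to a simple coding rule: a chosen outcome above the maximum value shown implies the respondent treats 'unlikely' as a 0% probability. A minimal sketch of such response coding; the function and category names are our own illustration, not the authors' materials:

```python
def code_response(chosen, shown_max):
    """Classify a 'which outcome' response relative to the shown range.

    A response above the maximum value shown implies a 0% probability
    reading of 'unlikely' (the extremity effect).
    """
    if chosen > shown_max:
        return "above_maximum"
    if chosen == shown_max:
        return "at_maximum"
    return "within_range"
```

Under this coding, the rate of "above_maximum" responses per communication format would quantify how strongly each format invites the extremity effect.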
179.
Stephen Gadsby 《Philosophical Psychology》2018,31(4):629-637
There are two ways in which we are aware of our bodies: reflectively, when we attend to them, and pre-reflectively, a kind of marginal awareness that pervades regular experience. However, there is an inherent issue with studying bodily awareness of the pre-reflective kind: given that it is, by definition, non-observational, how can we observe it? Kuhle claims to have found a way around this problem—we can study it indirectly by investigating an aspect of reflective bodily awareness: the sense of bodily ownership. Unfortunately, I argue, there is little reason to believe a relationship between pre-reflective bodily awareness and the sense of bodily ownership exists. Until more work is done, pre-reflective bodily awareness remains beyond our empirical grasp.
180.