650 results found; search time: 0 ms
131.
Emotional tears tend to increase perceived sadness in facial expressions. However, it is unclear whether tears would still be seen as an indicator of sadness when a tearful face is observed in an emotional context (e.g., a touching moment during a wedding ceremony). We examined the influence of context on the sadness-enhancement effect of tears in three studies. In Study 1, participants evaluated tearful or tearless expressions presented without body postures, with emotionally neutral postures, or with emotionally congruent postures (i.e., postures indicating the same emotion as the face). The results show that the presence of tears increases the perceived sadness of faces regardless of context. Similar results were found in Studies 2 and 3, which used visual scenes and written scenarios as contexts, respectively. Our findings demonstrate that tears on faces reliably indicate sadness, even in the presence of contextual information that suggests non-sadness emotions.
132.
Mingming Zhang, Ping Li, Lu Yu, Jie Ren, Shuxin Jia, Chaolun Wang, Weiqi He, Wenbo Luo. PsyCh Journal, 2023, 12(2): 178-184
In daily life, individuals need to recognize and update emotional information from others' changing body expressions. However, whether emotional bodies can enhance working memory (WM) remains unknown. In the present study, participants completed a modified n-back task in which they were required to indicate whether a presented image of an emotional body matched the item displayed before each block (0-back) or the item shown two positions earlier in the sequence (2-back). Each block comprised only fearful, happy, or neutral bodies. We found that in the 0-back trials, compared with neutral body expressions, participants responded faster and showed comparable ceiling-level accuracy for happy bodies, followed by fearful bodies. When WM load increased to 2-back, both fearful and happy bodies significantly facilitated WM performance (i.e., faster reaction times and higher accuracy) relative to the neutral condition. In summary, the current findings reveal an enhancement effect of emotional body expressions on WM and highlight the importance of emotional action information in WM.
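The 0-back/2-back procedure described above lends itself to a compact sketch. The following is a minimal illustration of how such a block could be generated and scored, not the authors' stimulus code; the image labels, block length, and match rate are invented for the example:

```python
import random

def make_nback_block(images, length, n, match_rate=0.3, seed=0):
    """Generate one block of an n-back task.

    For n == 0, a "match" means the current image equals a fixed
    reference item (images[0] here); for n >= 1, it equals the image
    shown n positions earlier. Returns the stimulus sequence and the
    list of correct match/no-match responses.
    """
    rng = random.Random(seed)
    seq = []
    for i in range(length):
        if n == 0:
            target = images[0]
        elif i >= n:
            target = seq[i - n]
        else:
            target = None  # no item n back yet, so no match is possible
        if target is not None and rng.random() < match_rate:
            seq.append(target)
        else:
            seq.append(rng.choice([im for im in images if im != target]))
    if n == 0:
        answers = [im == images[0] for im in seq]
    else:
        answers = [i >= n and seq[i] == seq[i - n] for i in range(length)]
    return seq, answers

# Example: a 2-back block drawn only from (hypothetical) fearful-body images.
stimuli = ["fear_01", "fear_02", "fear_03", "fear_04"]
sequence, answers = make_nback_block(stimuli, length=10, n=2)
```

Keeping each block to a single emotion category, as in the study, is just a matter of which `images` list is passed in.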
133.
Sarah Laurence, Kristen A. Baker, Valentina M. Proietti, Catherine J. Mondloch. British Journal of Psychology, 2022, 113(3): 677-695
Matching identity in images of unfamiliar faces is error prone, but we can easily recognize highly variable images of familiar faces - even images taken decades apart. Recent theoretical work based on computational modelling can account for how we recognize extremely variable instances of the same identity. We provide complementary behavioural data by examining older adults' representation of older celebrities who were also famous when young. In Experiment 1, participants completed a long-lag repetition priming task in which primes and test stimuli were the same age or different ages. In Experiment 2, participants completed an identity-aftereffects task in which the adapting stimulus was an old or a young photograph of one celebrity and the test stimulus was a morph between the adapting identity and a different celebrity; the adapting stimulus was the same age as the test stimulus on some trials (e.g., both old) or a different age (e.g., young adapter, old test stimulus). The magnitudes of priming and identity aftereffects were not influenced by whether the prime or adapting stimulus was the same age as the test face. Collectively, our findings suggest that humans have one common mental representation of a familiar face (e.g., Paul McCartney) that incorporates visual changes across decades, rather than multiple age-specific representations. These findings make novel predictions for state-of-the-art algorithms (e.g., deep convolutional neural networks).
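The morphed test stimuli in Experiment 2 interpolate between two identities. As a rough sketch of the idea (assuming pre-aligned face images stored as arrays; real morphing software also warps facial landmarks, which this intensity-only blend omits):

```python
import numpy as np

def morph(face_a, face_b, weight_b):
    """Pixel-wise linear blend of two aligned face images.

    weight_b = 0.0 returns face_a unchanged, 1.0 returns face_b,
    and intermediate values produce identity morphs between the two.
    """
    a = np.asarray(face_a, dtype=np.float64)
    b = np.asarray(face_b, dtype=np.float64)
    if a.shape != b.shape:
        raise ValueError("faces must be aligned to the same shape")
    return (1.0 - weight_b) * a + weight_b * b

# A 50/50 morph between stand-ins for the adapting identity and a
# different celebrity (tiny arrays used in place of real images).
identity_a = np.zeros((4, 4))
identity_b = np.full((4, 4), 200.0)
halfway = morph(identity_a, identity_b, 0.5)
```

Varying `weight_b` across trials yields the identity continuum on which aftereffects are typically measured.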
134.
European Journal of Developmental Psychology, 2013, 10(6): 705-721
The present longitudinal, naturalistic study investigated fathers' and infants' facial expressions of emotion during paternal infant-directed speech. Microanalysis of infant and paternal facial expressions of emotion during the naturalistic interactions of 11 infant-father dyads, from the 2nd to the 6th month, provided evidence that: (a) fathers and infants match their emotional states and attune their emotional intensity; (b) infants seem to match paternal facial emotional expressions more than vice versa; (c) the prevailing emotional states of each partner remain constant at the beginning and end of speech; and (d) the developmental trajectories of infant interest and paternal pleasure change significantly across the age range of 2-6 months and seem to follow similar courses. These results are interpreted within the framework of the theory of innate intersubjectivity.
135.
Quarterly Journal of Experimental Psychology, 2013, 66(5): 952-970
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech-song differences. Vocalists' jaw movements were coupled to their acoustic intensity, exhibiting differences across emotions and between speech and song. Vocalists' emotional movements extended beyond the vocal sound to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions prior to, during, and following vocalization. Emotional intentions were identified accurately for movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly, yet were identified accurately in all other conditions, confirming that facial expressions conveyed emotion more accurately than the voice in song but were equivalent to it in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, as well as differences in perception and acoustic-motor production.
136.
Josh P. Davis, Sarah Thorniley, Stuart Gibson, Chris Solomon. The Journal of Psychology, 2016, 150(1): 102-118
When the police have no suspect, they may ask an eyewitness to construct a facial composite of the suspect from memory. Faces are primarily processed holistically, and recently developed computerized holistic facial-composite systems (e.g., EFIT-V) have been designed to match these processes. The reported research compared children aged 6-11 years with adults on their ability to construct a recognizable EFIT-V composite. Adult constructors' EFIT-Vs received significantly higher composite-suspect likeness ratings from assessors than children's, although there were some notable exceptions. In comparison to adults, the child constructors also overestimated the composite-suspect likeness of their own EFIT-Vs. In a second phase, there were no differences between adult controls and constructors in correct identification rates from video lineups. However, correct suspect identification rates by child constructors were lower than those of child controls, suggesting that a child's memory for the suspect can be adversely influenced by composite construction. Nevertheless, all child constructors coped with the demands of the EFIT-V system, and the implications for research, theory, and criminal justice practice are discussed.
137.
A human-stigmergy model of product development through a series of incremental novelties and imitations is obtained by observing the isomorphism between solution trees generated by groups of humans participating in online games like Foldit, self-organizing topics in online learning systems, product evolution through a combination of innovation and imitation in the cell-phone industry, and social networks formed over the past 40 years in the creation of the Boston biotech commons. The model incorporates two very simple rules: (1) preferential attachment, and (2) the combination of existing product designs into a single design. A computer simulation of the model produces product-evolution networks isomorphic to the observed solution trees and product-evolution trees. Additionally, industries with a high degree of novelty produce product-evolution networks with scale-free structure and high betweenness centrality.
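The model's two rules can be rendered as a toy simulation. This is a minimal sketch under assumed parameters (the seed designs, step count, and combination probability are invented), not the authors' simulation code:

```python
import random

def simulate_product_network(steps, p_combine=0.3, seed=1):
    """Grow a product-evolution network under two rules:

    (1) preferential attachment: each new product descends from an
        existing design chosen with probability proportional to its
        degree (more-imitated designs attract more imitators);
    (2) combination: with probability p_combine, the new product
        instead merges two (degree-weighted) existing designs.

    Returns the network as {product: set_of_parent_products}.
    """
    rng = random.Random(seed)
    parents = {0: set(), 1: {0}}  # two seed designs
    degree = {0: 1, 1: 1}

    def pick():
        # Degree-weighted (preferential) choice among existing designs.
        nodes = list(degree)
        weights = [degree[n] for n in nodes]
        return rng.choices(nodes, weights=weights, k=1)[0]

    for new in range(2, 2 + steps):
        if rng.random() < p_combine:
            chosen = {pick(), pick()}  # may collapse to a single parent
        else:
            chosen = {pick()}
        parents[new] = chosen
        degree[new] = 1
        for p in chosen:
            degree[p] += 1
    return parents

net = simulate_product_network(steps=200)
```

Counting how many descendants each design accumulates in `net` lets one check whether the degree distribution is heavy-tailed, the scale-free signature the abstract mentions.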
138.
Donald F. Sacco, John Paul Wilson, Kurt Hugenberg, James H. Wirth. The Journal of Social Psychology, 2014, 154(4): 273-277
We tested the hypothesis that exposure to babyish faces can serve a social-surrogacy function, such that even limited exposure to babyish faces can fulfill social belongingness needs. We manipulated, on a between-participants basis, the sex and facial maturity of a target face seen in an imagined social interaction. Regardless of target sex, individuals indicated greater satisfaction of social belongingness needs following an imagined interaction with a babyish face compared to a mature adult face. These results indicate that brief exposure to babyish (relative to mature) faces, even without an extensive interaction, can satisfy social belongingness needs.
139.
Anthony C. Little, Christine A. Caldwell, Benedict C. Jones, Lisa M. DeBruine. British Journal of Psychology, 2015, 106(3): 397-413
Being paired with an attractive partner increases perceptual judgements of attractiveness in humans. We tested experimentally for prestige bias, whereby individuals follow the choices of prestigious others. Women rated the attractiveness of photographs of target males paired with either popular or less popular model female partners. We found that pairing a photo of a man with a woman presented as his partner positively influenced the rated attractiveness of the man when the woman was presented as more popular (Experiment 1). This effect was stronger in younger participants than in older participants (Experiment 1). Reversing the target and model, such that women rated women paired with popular and less popular men, revealed no effect of model popularity, and this null result was unrelated to participant age (Experiment 2). An additional experiment confirmed that participant age, and not stimulus age, primarily influenced the tendency to follow others' preferences in Experiment 1 (Experiment 3). We also confirmed that our manipulations of popularity led to variation in rated prestige (Experiment 4). These results suggest a sophisticated model-based bias in social learning, whereby individuals are most influenced by the choices of those with high popularity or prestige. Furthermore, older individuals moderate their use of such social information, so this form of social learning appears strongest in younger women.
140.