131.
We explored conceptions of love from the perspective of Ghanaian Christians. Using an ethnographic approach, we interviewed 61 participants (39 males, 22 females; ages 20 to 70) about their understanding and experiences of love in the context of family. Thematic analysis of the interview data revealed that love is expressed by meeting the material needs of children, spouses, parents, and close relatives; by helping other people in need, including the elderly, friends, and strangers; and through affectionate care. Communal and maintenance-oriented love appears to characterise love expression among Ghanaian Christians.
132.
Speech and song are universal forms of vocalization that may share aspects of emotional expression. Research has focused on parallels in acoustic features, overlooking facial cues to emotion. In three experiments, we compared moving facial expressions in speech and song. In Experiment 1, vocalists spoke and sang statements, each with five emotions. Vocalists exhibited emotion-dependent movements of the eyebrows and lip corners that transcended speech–song differences. Vocalists' jaw movements were coupled to their acoustic intensity and differed across emotions and between speech and song. Vocalists' emotional movements extended beyond the vocal sound itself to include large sustained expressions, suggesting a communicative function. In Experiment 2, viewers judged silent videos of vocalists' facial expressions recorded before, during, and after vocalization. Emotional intentions were identified accurately from movements during and after vocalization, suggesting that these movements support the acoustic message. Experiment 3 compared emotion identification in voice-only, face-only, and face-and-voice recordings. Emotions in voice-only singing were identified poorly but were identified accurately in all other conditions, indicating that facial expressions conveyed emotion more accurately than the voice in song, whereas the two were equivalent in speech. Collectively, these findings highlight broad commonalities in the facial cues to emotion in speech and song, alongside differences in perception and acoustic-motor production.
133.
This study examined whether subcortical stroke is associated with impaired facial emotion recognition. It also examined the lateralization of any impairment and the differential profiles of facial emotion recognition deficits associated with localized thalamic or basal ganglia damage. Thirty-eight patients with subcortical strokes and 19 matched normal controls volunteered to participate. Participants were individually presented with morphed photographs of facial emotion expressions over multiple trials and asked to classify each photograph according to Ekman's six basic emotion categories. The findings indicated that the clinical participants had impaired facial emotion recognition, though no clear lateralization pattern of impairment was observed. Patients with localized thalamic damage performed significantly worse than controls in recognizing sadness. Longitudinal studies of patients with subcortical brain damage should be conducted to examine how post-stroke cognitive reorganization affects emotion recognition.
134.
This longitudinal study investigates the relation between recall memory and communication in infancy and later cognitive development. Twenty-six typically developing Swedish children were tested during infancy for deferred imitation (memory), joint attention, and requesting (nonverbal communication); they were also tested during childhood for language and cognitive competence. Results showed that infants with low performance on both deferred imitation at 9 months and joint attention at 14 months obtained significantly lower scores on a test of cognitive abilities at 4 years of age. This long-term prediction from preverbal infancy to childhood cognition is of interest both to developmental theory and to practice.
135.
Infants first generalize across contexts and cues at 3 months of age in operant tasks but not until 12 months of age in imitation tasks. Three experiments using an imitation task examined whether infants younger than 12 months might generalize imitation if conditions were more like those in operant studies. Infants sat on a distinctive mat in a room in their home (the context) while an adult modeled actions on a hand puppet (the cue). When tested 24 h later, 6-month-olds generalized imitation when either the mat or the room (but not both) differed, whereas 9-month-olds generalized when both the mat and the room differed. In addition, 9-month-olds who imitated immediately also generalized to a novel test cue, whereas 6-month-olds did not. These results parallel those from operant studies and reveal that the similarity between the conditions of encoding and retrieval, not the type of task, determines whether infants generalize. The findings offer further evidence that memory development during infancy is a continuous function.
136.
Several convergent lines of evidence have suggested that the presence of an emotion signal in a visual stimulus can influence processing of that stimulus. In the current study, we picked up on this idea and explored the hypothesis that the presence of an emotional facial expression (happiness) would facilitate the identification of familiar faces. We studied two groups of normal participants (overall N = 54) and neurological patients with either left (n = 8) or right (n = 10) temporal lobectomies. Reaction times were measured while participants named familiar famous faces that had happy or neutral expressions. In support of the hypothesis, naming was significantly faster for the happy faces, and this effect held for the normal participants and for both patient groups. In the patients with left temporal lobectomies, the effect size for this facilitation was large (d = 0.87), suggesting that this manipulation might have practical implications for helping such patients compensate for the naming defects that often accompany their brain damage. Consistent with other recent work, our findings indicate that emotion can facilitate visual identification, perhaps via a modulatory influence of the amygdala on extrastriate cortex.
137.
This study examined whether sufficient-response-exemplar training of vocal imitation would result in improved articulation in children with a phonological disorder, and whether improved articulation established in the context of vocal imitation would transfer to other verbal classes such as object naming and conversational speech. Participant 1 was 6 years old and attended first grade in a regular public school. Participant 2 was 5 years 4 months old and attended a public kindergarten. Both participants had normal hearing and no additional handicaps. A multiple baseline design across behaviors (target sounds or blends) was employed to examine whether the vocal imitation training resulted in improved articulation. Results showed that both participants improved articulation once training was implemented, and that the improved articulation transferred from vocal imitation to more natural speech such as object naming and conversational speech. Improvement established during training was maintained posttraining and at a 6-month follow-up.
138.
Processing Faces and Facial Expressions
This paper reviews the processing of facial identity and expressions. The question of whether these two tasks are served by independent systems has been addressed from different approaches over the past 25 years. More recently, neuroimaging techniques have provided researchers with new tools to investigate how facial information is processed in the brain. First, findings from traditional approaches to identity and expression processing are summarized. The review then covers findings from neuroimaging studies on face perception, recognition, and encoding. Processing of the basic facial expressions is detailed in light of behavioral and neuroimaging data. Whereas data from experimental and neuropsychological studies support the existence of two systems, the neuroimaging literature yields a less clear picture, showing considerable overlap in activation patterns across the different face-processing tasks. Further, activation patterns in response to facial expressions support the notion of distinct neural substrates for processing different facial expressions.
139.
The hypotheses of this investigation were based on attachment theory and Bowlby's conception of "internal working models", held to consist of one mainly emotional structure (model-of-self) and one more consciously cognitive structure (model-of-others), which are assumed to operate at different temporal stages of information processing. Facial muscle reactions of individuals with positive versus negative internal working models were compared at different stages of information processing. The Relationship Scale Questionnaire (RSQ) was used to categorize subjects as having a positive or negative model-of-self and model-of-others, and the State-Trait Anxiety Inventory (STAI-T) was used to measure trait anxiety. Pictures of happy and angry faces followed by backward-masking stimuli were presented to 61 subjects at three exposure times (17 ms, 56 ms, 2,350 ms) in order to elicit reactions first at an automatic level and then at successively more cognitively elaborated levels. Facial muscle reactions were recorded by electromyography (EMG), with higher corrugator activity representing more negative emotions and higher zygomatic activity more positive emotions. In line with the hypothesis, subjects with a negative model-of-self scored significantly higher on the STAI-T than subjects with a positive model-of-self. They also showed overall stronger corrugator than zygomatic activity, giving further evidence of a negative tonic affective state. At the longest exposure time (2,350 ms), representing emotionally regulated responses, negative model-of-self subjects showed a significantly stronger corrugator response and reported more negative feelings than subjects with a positive model-of-self. These results supported the hypothesis that subjects with a negative model-of-self would have difficulty with self-regulation of negative affect. In line with expectations, model-of-others, assumed to represent mainly knowledge structures, did not interact with the physiological emotional measures employed (facial muscle reactions or tonic affective state).
140.
The Developmental Characteristics and Influencing Factors of Emotional Expression Discrimination Ability
乔建中 (Qiao Jianzhong), 《心理科学》 (Psychological Science), 1998, 21(1): 52-56
This experiment studied how students of different ages discriminate various types of emotional expressions, examining the developmental characteristics of expression discrimination ability and its relation to individual emotional development. The results showed that: (1) the ability to discriminate bodily (postural) expressions is the core factor underlying age differences in expression discrimination; (2) the ability to discriminate facial expressions develops early and is already well established by primary school, whereas the ability to discriminate bodily expressions develops later, not reaching the same level until the university years; and (3) the development of bodily expression discrimination is linked to the stage-specific characteristics of individual emotional development and to the main social-adaptation issues of each stage.