282 search results found (search time: 0 ms); results 171-180 are listed below.
171.
During social interactions we often have an automatic and unconscious tendency to copy or ‘mimic’ others’ actions. The dominant view on the neural basis of mimicry appeals to an automatic coupling between perception and action. It has been suggested that this coupling is formed through associative learning during correlated sensorimotor experience. Although studies with adult participants have provided support for this hypothesis, little is known about the role of sensorimotor experience in supporting the development of perceptual‐motor couplings, and consequently mimicry behaviour, in infancy. Here we investigated whether the extent to which an observed action elicits mimicry depends on the opportunity an infant has had to develop perceptual‐motor couplings for this action through correlated sensorimotor experience. We found that mothers’ tendency to imitate their 4‐month‐olds’ facial expressions during a parent‐child interaction session was related to infants’ facial mimicry as measured by electromyography. Maternal facial imitation was not related to infants’ mimicry of hand actions, and instead we found preliminary evidence that infants’ tendency to look at their own hands may be related to their tendency to mimic hand actions. These results are consistent with the idea that mimicry is supported by perceptual‐motor couplings that are formed through correlated sensorimotor experience obtained by observing one's own actions and imitative social partners.
172.
Older adults perceive less intense negative emotion in facial expressions compared to younger counterparts. Prior research has also demonstrated that mood alters facial emotion perception. Nevertheless, there is little evidence evaluating the interactive effects of age and mood on emotion perception. This study investigated the effects of sad mood on younger and older adults’ perception of emotional and neutral faces. Participants rated the intensity of stimuli while listening to sad music and in silence. Measures of mood were administered. Younger and older participants rated sad faces as displaying stronger sadness when they experienced sad mood. While younger participants showed no influence of sad mood on happiness ratings of happy faces, older adults rated happy faces as conveying less happiness when they experienced sad mood. This study demonstrates how emotion perception can change when a controlled mood induction procedure is applied to alter mood in younger and older participants.
173.
To study different aspects of facial emotion recognition, valid methods are needed. The most widespread methods have some limitations. We propose a more ecological method that consists of presenting dynamic faces and measuring verbal reaction times. We presented 120 video clips depicting a gradual change from a neutral expression to a basic emotion (anger, disgust, fear, happiness, sadness and surprise), and recorded hit rates and reaction times of verbal labelling of emotions. Our results showed that verbal responses to the six basic emotions differed in hit rates and reaction times: happiness > surprise > disgust > anger > sadness > fear (i.e., emotions earlier in this ordering were labelled more accurately and faster). Generally, our data are in accordance with previous findings, but the differentiation of responses is finer than in previous experiments on the six basic emotions.
174.
175.
We investigated people's ability to infer others’ mental states from their emotional reactions, manipulating whether agents wanted, expected, and caused an outcome. Participants recovered agents’ desires throughout. When the agent observed, but did not cause, the outcome, participants’ ability to recover the agent's beliefs depended on the evidence they got (i.e., her reaction only to the actual outcome or to both the expected and actual outcomes; Experiments 1 and 2). When the agent caused the event, participants’ judgments also depended on the probability of the action (Experiments 3 and 4); when actions were improbable given the mental states, people failed to recover the agent's beliefs even when they saw her react to both the anticipated and actual outcomes. A Bayesian model captured human performance throughout (rs ≥ .95), consistent with the proposal that people rationally integrate information about others’ actions and emotional reactions to infer their unobservable mental states.
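The abstract reports only that a Bayesian model integrating observed actions and emotional reactions captured participants' judgments; it does not give the model's details. As a rough, non-authoritative illustration of that style of inverse inference, the Python sketch below computes a posterior over hypothetical desire and belief pairs from an assumed action likelihood and reaction likelihood. All hypothesis labels, function names, and probability values are invented for the example.

```python
# Minimal sketch of Bayesian mental-state inference from an action and an
# emotional reaction. The hypothesis space, likelihoods, and probability
# values are illustrative assumptions, not the published model.

from itertools import product

DESIRES = ["wants_A", "wants_B"]          # which outcome the agent prefers
BELIEFS = ["expects_A", "expects_B"]      # which outcome the agent anticipated

def p_action(action, desire, belief):
    """Likelihood of the observed action given mental states (assumed values)."""
    # An agent who wants and expects outcome A is more likely to act to obtain A.
    if action == "chose_A":
        return 0.9 if (desire == "wants_A" and belief == "expects_A") else 0.3
    return 0.9 if (desire == "wants_B" and belief == "expects_B") else 0.3

def p_reaction(reaction, desire, outcome):
    """Likelihood of the emotional reaction given desire and the actual outcome."""
    satisfied = (desire == "wants_A") == (outcome == "A")
    if reaction == "happy":
        return 0.85 if satisfied else 0.15
    return 0.15 if satisfied else 0.85

def posterior(action, reaction, outcome):
    """P(desire, belief | action, reaction, outcome) with a uniform prior."""
    scores = {}
    for desire, belief in product(DESIRES, BELIEFS):
        scores[(desire, belief)] = (
            p_action(action, desire, belief) * p_reaction(reaction, desire, outcome)
        )
    total = sum(scores.values())
    return {hyp: score / total for hyp, score in scores.items()}

if __name__ == "__main__":
    for hyp, p in posterior("chose_A", "sad", outcome="B").items():
        print(hyp, round(p, 3))
```

With these placeholder numbers, a sad reaction to outcome B shifts the posterior toward the agent wanting A, loosely mirroring the desire-recovery result described above.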
176.
As intergenerational interactions increase due to an ageing population, the study of emotion-related responses to the elderly is increasingly relevant. Previous research found mixed results regarding affective mimicry – a measure related to liking and affiliation. In the current study, we investigated emotional mimicry of younger and older actors following an encounter with a younger and an older player in a Cyberball game. In a complete exclusion condition, in which both the younger and the older player excluded the participant, we expected emotional mimicry to be stronger for younger vs. older actors. In a partial inclusion condition, in which the younger player excluded the participant while the older player included them, we predicted that the difference in player behaviour would lead to a difference in liking. This increased liking of the older interaction partner should reduce the difference in emotional mimicry towards the two age groups. Results revealed more mimicry for older actors following partial inclusion, especially for negative emotions, suggesting that inclusive behaviour by an older person in an interaction may be a means to increase mimicry of, and affiliation with, the elderly.
177.
Various models have been proposed to increase understanding of the cognitive basis of facial emotions. Despite those efforts, interactions between facial emotions have received minimal attention. If collective behaviours relating to each facial emotion in the comprehensive cognitive system could be assumed, specific facial emotion relationship patterns might emerge. In this study, we demonstrate that the framework of complex networks can effectively capture those patterns. We generate 81 facial emotion images (6 prototypes and 75 morphs) and then ask participants to rate degrees of similarity in 3240 facial emotion pairs in a paired comparison task. A facial emotion network constructed on the basis of similarity clearly forms a small-world network, which features an extremely short average network distance and close connectivity. Further, even if two facial emotions have opposing valences, they are connected within only two steps. In addition, we show that intermediary morphs are crucial for maintaining full network integration, whereas prototypes are not at all important. These results suggest the existence of collective behaviours in the cognitive systems of facial emotions and also explain why people can efficiently recognize facial emotions in terms of information transmission and propagation. For comparison, we construct three simulated networks: one based on the categorical model, one based on the dimensional model, and one random network. The results reveal that small-world connectivity in facial emotion networks is clearly different from those networks, suggesting that a small-world network is the most suitable model for capturing the cognitive basis of facial emotions.
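The abstract does not specify how the similarity ratings were turned into a graph or how small-worldness was quantified. The sketch below shows one plausible pipeline: thresholding a (here randomly generated placeholder) similarity matrix into an unweighted graph with networkx and comparing its characteristic path length and clustering coefficient against a density-matched random graph. The threshold, node count, and data are assumptions for illustration only.

```python
# Illustrative sketch of building a similarity-based emotion network and
# checking small-world-like properties. The similarity matrix, threshold,
# and node count are placeholders, not the study's data.

import numpy as np
import networkx as nx

rng = np.random.default_rng(0)

n_items = 81                              # e.g., 6 prototypes + 75 morphs
sim = rng.random((n_items, n_items))      # placeholder pairwise similarity ratings
sim = (sim + sim.T) / 2                   # make the matrix symmetric
np.fill_diagonal(sim, 0.0)

threshold = 0.8                           # assumed cut-off for drawing an edge
G = nx.Graph()
G.add_nodes_from(range(n_items))
for i in range(n_items):
    for j in range(i + 1, n_items):
        if sim[i, j] >= threshold:
            G.add_edge(i, j)

if nx.is_connected(G):
    L = nx.average_shortest_path_length(G)   # characteristic path length
    C = nx.average_clustering(G)             # clustering coefficient

    # Compare against a random graph with the same edge density (Erdos-Renyi).
    p = nx.density(G)
    R = nx.erdos_renyi_graph(n_items, p, seed=0)
    L_rand = nx.average_shortest_path_length(R) if nx.is_connected(R) else float("inf")
    C_rand = nx.average_clustering(R)

    # Small-world signature: C well above C_rand while L stays close to L_rand.
    print(f"L={L:.2f} (random {L_rand:.2f}), C={C:.2f} (random {C_rand:.2f})")
else:
    print("Graph is disconnected at this threshold; lower the threshold.")
```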
178.
179.
Two experiments investigated categorical perception (CP) effects for affective facial expressions and linguistic facial expressions from American Sign Language (ASL) for Deaf native signers and hearing non-signers. Facial expressions were presented in isolation (Experiment 1) or in an ASL verb context (Experiment 2). Participants performed ABX discrimination and identification tasks on morphed affective and linguistic facial expression continua. The continua were created by morphing end-point photo exemplars into 11 images, changing linearly from one expression to another in equal steps. For both affective and linguistic expressions, hearing non-signers exhibited better discrimination across category boundaries than within categories for both experiments, thus replicating previous results with affective expressions and demonstrating CP effects for non-canonical facial expressions. Deaf signers, however, showed significant CP effects only for linguistic facial expressions. Subsequent analyses indicated that order of presentation influenced signers’ response time performance for affective facial expressions: viewing linguistic facial expressions first slowed response time for affective facial expressions. We conclude that CP effects for affective facial expressions can be influenced by language experience.
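For illustration, the snippet below generates an 11-step continuum by blending two end-point arrays in equal linear increments. This is a simplified pixel cross-fade; the study's stimuli were produced with proper facial morphing, which also warps geometry, so the function here only sketches the "equal steps" idea.

```python
# Simplified sketch of an 11-step linear continuum between two end-point
# images. Real facial morphing warps landmark geometry as well as texture;
# a plain pixel cross-fade is used here only to illustrate "equal steps".

import numpy as np

def linear_continuum(img_a: np.ndarray, img_b: np.ndarray, steps: int = 11):
    """Return `steps` images blending img_a -> img_b in equal increments."""
    assert img_a.shape == img_b.shape
    weights = np.linspace(0.0, 1.0, steps)          # 0.0, 0.1, ..., 1.0
    return [(1 - w) * img_a + w * img_b for w in weights]

# Example with dummy 64x64 grayscale "images".
a = np.zeros((64, 64))
b = np.ones((64, 64))
continuum = linear_continuum(a, b)
print([round(float(m.mean()), 1) for m in continuum])  # 0.0, 0.1, ..., 1.0
```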
180.
We demonstrate here that initially neutral items can acquire specific value based on their associated outcomes, and that responses of physiological systems to such previously meaningless stimuli can rapidly reflect this associative history. Each participant completed an associative learning task in which four neutral abstract pictures were each repeatedly paired with one of four foods that varied in valence and magnitude. Over the course of learning, participants’ “liking” ratings of and preferences for each picture came to reflect the value of the food with which it was paired. The abstract pictures also elicited physiological responses characteristic of the foods with which they were paired, including changes in facial electromyography (EMG) and preferential looking. A logistic modeling procedure showed that learning parameters, such as the rate at which participants learned the values associated with the pictures, were similar across food outcomes of different value.
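The abstract names a logistic modeling procedure for estimating learning parameters but gives no formula. As a generic, hedged sketch of that kind of analysis, the code below fits a logistic curve to simulated trial-by-trial choices of the higher-valued picture by maximum likelihood; the data, the parameterization, and the reading of the slope as a learning-rate-like quantity are assumptions, not the authors' procedure.

```python
# Rough sketch of a logistic model of learning: the probability of choosing
# the higher-valued picture as a function of trial number. The data below are
# simulated; the slope plays the role of a learning-rate-like parameter.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

trials = np.arange(1, 41)                                  # 40 learning trials
true_p = 1 / (1 + np.exp(-(-1.0 + 0.15 * trials)))         # simulated learning curve
chose_better = rng.binomial(1, true_p)                     # 1 = picked higher-valued picture

def neg_log_lik(params):
    """Negative log-likelihood of the binary choices under a logistic curve."""
    intercept, slope = params
    p = 1 / (1 + np.exp(-(intercept + slope * trials)))
    p = np.clip(p, 1e-9, 1 - 1e-9)
    return -np.sum(chose_better * np.log(p) + (1 - chose_better) * np.log(1 - p))

fit = minimize(neg_log_lik, x0=[0.0, 0.0], method="Nelder-Mead")
intercept_hat, slope_hat = fit.x
print(f"intercept={intercept_hat:.2f}, learning slope={slope_hat:.2f}")
```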