文章检索 (Article Search)
Paid full text: 16 articles; free: 0.
By year: 2020 (2), 2017 (1), 2016 (1), 2013 (4), 2012 (2), 2011 (3), 2007 (2), 2006 (1).
16 results found (search time: 31 ms).
1.
Prior evidence has shown that aversive emotional states are characterised by an attentional bias towards aversive events. The present study investigated whether aversive emotions also bias attention towards stimuli that represent means by which the emotion can be alleviated. We induced disgust by having participants touch fake disgusting objects. Participants in the control condition touched non-disgusting objects. The results of a subsequent dot-probe task revealed that attention was oriented to disgusting pictures irrespective of condition. However, participants in the disgust condition also oriented towards pictures representing cleanliness. These findings suggest that the deployment of attention in aversive emotional states is not purely stimulus driven but is also guided by the goal to alleviate this emotional state.
2.
3.
In our study, we aimed to analyse the effect of child gender on parental and child interactive play behaviour, as well as to determine relations between parental general knowledge of child development and parental play behaviour in two developmental periods, namely toddlerhood and early childhood. The sample included 99 children (50 toddlers aged 1–3 years; 49 preschoolers aged 3–5 years) and their parents. Parent–child interactive play with a standard set of toys was observed and assessed in the home setting. We found that parental and child play behaviours were closely related in both age groups. In addition, the child's gender affected child, but not parental, play behaviour, such that girls more frequently established the content of play, sustained the play frame, and used more symbolic transformations during play than boys did. Parents' general knowledge of child development was associated with both parental education and parental play behaviour. The findings are applicable to the different professionals working with children and their parents in the preschool period.
4.
5.
Independent lines of evidence suggest that the representation of emotional evaluation recruits both vertical and horizontal spatial mappings. These two spatial mappings differ in their experiential origins and their productivity, and available data suggest that they differ in their saliency. Yet, no study has so far compared their relative strength in an attentional orienting reaction time task that affords the simultaneous manifestation of both types of mapping. Here, we investigated this question using a visual search task with emotional faces. We presented angry and happy face targets and neutral distracter faces in top, bottom, left, and right locations on the computer screen. Conceptual congruency effects were observed along the vertical dimension supporting the ‘up = good’ metaphor, but not along the horizontal dimension. This asymmetrical processing pattern was observed when faces were presented in a cropped (Experiment 1) and whole (Experiment 2) format. These findings suggest that the ‘up = good’ metaphor is more salient and readily activated than the ‘right = good’ metaphor, and that the former outcompetes the latter when the task context affords the simultaneous activation of both mappings.
6.
7.
Damjanovic and Hanley (2007) showed that episodic information is more readily retrieved from familiar faces than familiar voices, even when the two presentation modalities are matched for overall recognition rates by blurring the faces. This pattern of performance contrasts with the results obtained by Hanley and Turner (2000), who showed that semantic information could be recalled equally easily from familiar blurred faces and voices. The current study used the procedure developed by Hanley and Turner (2000) and applied it to the stimuli used by Damjanovic and Hanley (2007). The findings showed a marked decrease in retrieval of occupations and names from familiar voices relative to blurred faces even though the two modalities were matched for overall levels of recognition and rated familiarity. Similar results were obtained in Experiment 2, in which the same participants were asked to recognise both faces and voices. It is argued that these findings pose problems for any model of person recognition (e.g., Burton, Bruce, & Johnston, 1990) in which familiarity decisions occur beyond the point at which information from different modalities has been integrated.
8.
Given that semantic processes mediate early stages in the elicitation of emotions, we expect that already activated emotion-specific information can influence the elicitation of an emotion. In Experiment 1, participants were exposed to masked International Affective Picture System (IAPS) pictures that elicited either disgust or fear. Following the presentation of the primes, other IAPS pictures were presented as targets that elicited either disgust or fear. The participants' task was to classify the target picture as either disgust- or fear-evoking. In Experiment 2, we substituted the IAPS primes with facial expressions of either disgust or fear. In Experiment 3, we substituted the IAPS primes with the words disgust or fear. In all three experiments, we found that prime–target combinations of the same emotion were responded to faster than prime–target combinations of different emotions. Our findings suggest that the influence of primes on the elicitation of emotion is mediated by activated schemata or appraisal processes.
9.
Facial expressions such as smiling or frowning are normally followed by, and often aim at, the observation of corresponding facial expressions in social counterparts. Given this contingency between one’s own and other persons’ facial expressions, the production of such facial actions might be the subject of so-called action–effect compatibility effects. In the present Experiment 1, we confirmed this assumption. Participants were required to smile or frown. The generation of these expressions was harder when participants produced predictable feedback from a virtual counterpart that was incompatible with their own facial expression; for example, smiling produced the presentation of a frowning face. The results of Experiment 2 revealed that this effect vanishes with inverted faces as action feedback, which shows that the phenomenon is bound to the instantaneous emotional interpretation of the feedback. These results comply with the assumption that the generation of facial expressions is controlled by an anticipation of these expressions’ effects in the social environment.
10.
Four experiments probed the nature of categorical perception (CP) for facial expressions. A model based on naming alone failed to accurately predict performance on these tasks. The data are instead consistent with an extension of the category adjustment model (Huttenlocher et al., 2000), in which the generation of a verbal code (e.g., "happy") activated knowledge of the expression category's range and central tendency (prototype) in memory, which was retained as veridical perceptual memory faded. Further support for a memory bias toward the category center came from a consistently asymmetric pattern of within-category errors. Verbal interference in the retention interval selectively removed CP for facial expressions under blocked, but not under randomized, presentation conditions. However, verbal interference at encoding removed CP even under randomized conditions, and these effects were shown to extend even to caricatured expressions, which lie outside the normal range of expression categories.
Copyright © 北京勤云科技发展有限公司 (Beijing Qinyun Technology Development Co., Ltd.)  京ICP备09084417号