211.
It has been argued that critical functions of the human amygdala are to modulate moment-to-moment vigilance and to enhance the processing and consolidation of memories of emotionally arousing material. In this functional magnetic resonance imaging study, pictures of human faces bearing fearful, angry, and happy expressions were presented to nine healthy volunteers in a backward masking procedure that used neutral facial expressions as masks. Activation of the left and right amygdala in response to the masked fearful faces (compared with neutral faces) was significantly correlated with the number of fearful faces detected. In addition, right but not left amygdala activation in response to the masked angry faces was significantly related to the number of angry faces detected. The present findings underscore the role of the amygdala in the detection of, and consolidation of memory for, marginally perceptible threatening facial expressions.
212.
Suzuki A, Hoshino T, Shigemasu K. Cognition, 2006, 99(3): 327-353.
Assessing individual differences in facial expression recognition normally has to address two major issues: (1) high agreement levels (ceiling effects) and (2) difficulty levels that differ across emotions. We propose a new assessment method designed to quantify individual differences in the recognition of the six basic emotions, 'sensitivities to basic emotions in faces.' We addressed the two major assessment issues by combining morphing techniques with item response theory (IRT). Morphing was used to create intermediate, mixed facial expression stimuli with various levels of recognition difficulty. Applying IRT enabled us to estimate the individual latent trait levels underlying the recognition of the respective emotions (sensitivity scores), unbiased by the stimulus properties that constitute difficulty. In a series of two experiments we demonstrate that the sensitivity scores successfully address the two major assessment issues and capture the concomitant individual variability. Intriguingly, correlational analyses of the sensitivity scores for the different emotions revealed orthogonality between happy and non-happy emotion recognition. To our knowledge, this is the first report of the independence of happiness recognition, unaffected by stimulus difficulty.
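As a rough illustration of the IRT idea described above (a minimal sketch, not the authors' actual model or data), the following Python code estimates one respondent's latent sensitivity under a two-parameter logistic (2PL) model by grid-search maximum likelihood; the item discrimination and difficulty values are hypothetical stand-ins for morphed stimuli of increasing difficulty.

```python
import numpy as np

def p_correct(theta, a, b):
    """2PL item response function: probability of recognizing an item given
    latent sensitivity theta, discrimination a, and difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def estimate_sensitivity(responses, a, b, grid=np.linspace(-4, 4, 801)):
    """Grid-search maximum-likelihood estimate of theta for one respondent.
    responses: 1 = emotion recognized, 0 = not recognized, one entry per item."""
    responses = np.asarray(responses, dtype=float)
    log_lik = np.array([
        np.sum(responses * np.log(p_correct(t, a, b)) +
               (1 - responses) * np.log(1 - p_correct(t, a, b)))
        for t in grid
    ])
    return grid[np.argmax(log_lik)]

# Hypothetical item parameters for five morphed items of increasing difficulty.
a = np.array([1.2, 1.0, 1.5, 0.8, 1.3])    # discrimination
b = np.array([-1.5, -0.5, 0.0, 0.7, 1.8])  # difficulty (e.g., morph level)

# One participant's recognition pattern across the five items.
print(estimate_sensitivity([1, 1, 1, 0, 0], a, b))
```

Because the sensitivity estimate conditions on the item parameters, two respondents who saw items of different difficulty can still be compared on the same latent scale, which is the point of the approach described in the abstract.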
213.
The purpose of this study was to investigate the facial muscle pattern of disgust in comparison with appetence and joy, using an improved facial EMG method. We analyzed the activity of nine facial muscles in forty healthy subjects, who were randomly divided into two groups (oversaturated vs. hungry) of ten women and ten men each. Four different emotions (disgust, appetence, excited joy, and relaxed joy) were induced by showing pictures from the IAPS. Pre-visible facial muscle activity was measured with a new facial EMG device, and subjective ratings were obtained on a Visual Analog Scale (VAS). Disgust is represented by a specific facial muscle pattern involving M. corrugator and M. orbicularis oculi, clearly distinguishing it from the facial patterns of appetence and joy. The intensity of disgust is stronger in a state of hunger than under oversaturation and is altogether stronger in females than in males. Our findings indicate that the entire emotion system can be explored successfully with a state-of-the-art psychophysiological method such as our EMG device.
214.
We evaluated the effects of functionally equivalent task designs and alternatives, as validated by motion study procedures, on dependent variables relevant to performing a selected task (nonadaptive responses, use of the alternative, attempts at the task, and completed attempts at the task). First, we evaluated the effects of functionally equivalent task designs on the dependent variables. Second, we evaluated the effects of an efficient functionally equivalent alternative on those variables. Third, we compared the effects of the efficient functionally equivalent alternative with those of a less efficient functionally equivalent alternative on the same variables. The results showed that the inefficient functionally equivalent task design occasioned higher rates of nonadaptive responses than the efficient functionally equivalent task design. The results also showed that the functionally equivalent task designs and alternatives competed within and across response classes to reduce nonadaptive responses. Mixed results were obtained in comparing the effects of the efficient versus the less efficient functionally equivalent alternatives. We provide evidence for extending the current concept of functional equivalence to include task design responses as well as alternative responses in functional equivalence training.
215.
An Experimental Study of the Characteristics of the Imagery Motion-Inference Processing Subsystem
游旭群, 杨治良. 《心理科学》, 1998, 21(3): 231-233, 225
Imagery motion-inference processing was tested in 20 pilots, 10 elderly participants, and their respective control groups. The results showed that (1) apart from a significant difference from their control group in reaction time on the easier task, the pilots showed no marked advantage in either the speed or the accuracy of imagery motion-inference processing; (2) compared with the easier task, the elderly group did not differ significantly from the young group in the speed or accuracy of imagery motion-inference processing at the more difficult level; (3) compared with the young participants, the elderly participants' processing time was more strongly affected by increases in processing load, and, with respect to response accuracy, compared with the young group …
216.
This study examined, in the recognition of moving figures, the processing characteristics of different figure features (color, shape, etc.), directional effects, and the influence of different spatial positions (or temporal intervals) on figure recognition. The results showed that when processing motion information, the visual system does not treat the different features of a target evenly: the features differ in processing difficulty, with color being easier to process than shape. The visual system's figure-matching responses also differ across motion directions, and the matching process is affected by temporal and distance factors, with matching reaction time decreasing as the spatial distance between the two compared figures increases.
217.
A static bar is perceived to dynamically extend from a peripheral cue (illusory line motion, ILM) or from a part of another figure presented in the previous frame (transformational apparent motion, TAM). We examined whether the visibility of the cue stimuli affected these transformational motions. Continuous flash suppression, a form of dynamic interocular masking, was used to reduce the visibility of the cue stimuli. Both ILM and TAM occurred significantly when the d' for the cue stimuli was zero (Experiment 1) and when the cue stimuli were presented at subthreshold levels (Experiment 2). These results suggest that the higher-order motion processing underlying TAM and ILM can be weakly but significantly activated by invisible visual information.
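For readers unfamiliar with the visibility measure used above, here is a minimal Python sketch of how d' is commonly computed from cue-detection responses; the log-linear correction for extreme rates is my assumption, not necessarily the correction used in the study.

```python
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity d' = z(hit rate) - z(false-alarm rate).
    A log-linear correction (+0.5 per cell) keeps rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Chance-level cue detection (hit rate ~= false-alarm rate) yields d' near zero,
# the situation reported in Experiment 1.
print(d_prime(hits=20, misses=20, false_alarms=20, correct_rejections=20))
```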
218.
Facial expressions play a crucial role in emotion recognition compared with other modalities. In this work, an integrated network that can recognize emotion intensity levels from facial images in real time using deep learning is proposed. The cognitive study of facial expressions based on intensity levels is useful in applications such as healthcare, cobotics, and Industry 4.0. This work proposes to augment emotion recognition with two other important parameters: valence and emotion intensity. This enables better automated responses by a machine to an emotion. The valence model classifies emotions as positive or negative, and the discrete model classifies them as happiness, anger, disgust, surprise, or a neutral state using a Convolutional Neural Network (CNN). Feature extraction and classification are carried out using the CMU Multi-PIE database. The proposed architecture achieves 99.1% and 99.11% accuracy for the valence and discrete models, respectively, on offline image data with 5-fold cross-validation. The average accuracies achieved in real time for the valence and discrete models are 95% and 95.6%, respectively. This work also contributes a new database built from facial landmarks, with three intensity levels of facial expressions, which supports classifying expressions into low, mild, and high intensities. Performance is also tested with different classifiers. The proposed integrated system is configured for real-time Human Robot Interaction (HRI) applications on a test bed consisting of a Raspberry Pi and an RPA platform to assess its performance.
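To make the kind of five-class discrete-emotion classifier described above concrete, here is a minimal Keras sketch. The layer sizes, the 48x48 grayscale input, and the training settings are illustrative assumptions, not the architecture reported in the abstract.

```python
# Minimal illustrative CNN for 5-class facial-expression recognition
# (happiness, anger, disgust, surprise, neutral); layer choices are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_discrete_emotion_cnn(input_shape=(48, 48, 1), num_classes=5):
    model = models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),  # one unit per emotion class
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_discrete_emotion_cnn()
model.summary()
```

A valence model of the kind mentioned in the abstract would follow the same pattern with a two-class (positive/negative) output layer.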
219.
This study investigated audiovisual synchrony perception in a rhythmic context, where the sound was not consequent upon the observed movement. Participants judged synchrony between a bouncing point-light figure and an auditory rhythm in two experiments. Two questions were of interest: (1) whether the reference in the visual movement, with which the auditory beat should coincide, relies on a position or a velocity cue; (2) whether the figure form and motion profile affect synchrony perception. Experiment 1 required synchrony judgments with regard to the same (lowest) position of the movement in four visual conditions: two figure forms (human or non-human) combined with two motion profiles (human or ball trajectory). Whereas figure form did not affect synchrony perception, the point of subjective simultaneity differed between the two motions, suggesting that participants adopted the peak velocity in each downward trajectory as their visual reference. Experiment 2 further demonstrated that, when judgment was required with regard to the highest position, the maximal synchrony response was considerably lower for ball motion, which lacked a peak velocity in the upward trajectory. The finding of peak velocity as a cue parallels results of visuomotor synchronization tasks employing biological stimuli, suggesting that synchrony judgment with rhythmic motions relies on the perceived visual beat.
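As a generic illustration of how a point of subjective simultaneity (PSS) like the one reported above is often estimated (a sketch with made-up data, not the authors' analysis), the following Python code fits a Gaussian-shaped synchrony-response function to the proportion of "synchronous" judgments across audiovisual offsets and takes the fitted peak location as the PSS.

```python
import numpy as np
from scipy.optimize import curve_fit

def synchrony_curve(soa, peak, pss, width):
    """Gaussian-shaped proportion of 'synchronous' responses as a function of
    audiovisual offset (SOA, ms); the fitted center `pss` is the point of
    subjective simultaneity."""
    return peak * np.exp(-0.5 * ((soa - pss) / width) ** 2)

# Hypothetical data: negative SOA = sound leads the visual reference.
soas = np.array([-300, -200, -100, 0, 100, 200, 300], dtype=float)
p_sync = np.array([0.10, 0.35, 0.75, 0.90, 0.70, 0.30, 0.05])

params, _ = curve_fit(synchrony_curve, soas, p_sync, p0=[1.0, 0.0, 100.0])
peak, pss, width = params
print(f"Estimated PSS: {pss:.1f} ms")
```

A shift of the fitted PSS between the human-motion and ball-motion conditions is the kind of difference the abstract interprets as a change of visual reference.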
220.
Research has shown that anger faces represent a potent motivational incentive for individuals with a high implicit power motive (nPower). However, it is well known that anger expressions can vary in intensity, ranging from mild anger to rage. To examine nPower-relevant processing of emotional intensity in anger faces, an ERP oddball task with facial stimuli was used, with neutral expressions as the standard and targets varying in anger intensity (50%, 100%, or 150% emotive). Thirty-one college students participated in the experiment (15 low and 16 high nPower persons, as determined by the Picture Story Exercise, PSE). Compared with low nPower persons, high nPower persons showed a higher percentage of correct responses when discriminating low-intensity (50%) anger faces from neutral faces. ERPs for the 100% versus 150% anger expressions revealed that high-intensity (150%) anger expressions elicited larger P3a and late positive potential (LPP) amplitudes than prototypical (100%) anger expressions in power-motivated individuals. Conversely, low nPower participants showed no differences in either the P3a or the LPP component. These findings demonstrate that persons with high nPower are sensitive to intensity changes in anger faces and that their sensitivity increases with the intensity of the anger faces.
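As a generic sketch of how component amplitudes such as the LPP mentioned above are commonly quantified (assumed epoch structure and time window, not the study's actual pipeline), the following Python code averages single-trial epochs per condition and takes the mean voltage in a late window.

```python
import numpy as np

def mean_amplitude(epochs, times, window):
    """Average epochs (trials x samples, in microvolts) into an ERP and return
    the mean amplitude within a time window (in seconds)."""
    erp = epochs.mean(axis=0)
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].mean()

# Hypothetical epoched data: 40 trials, 1-s epochs sampled at 500 Hz.
rng = np.random.default_rng(0)
times = np.arange(0, 1.0, 1 / 500)
epochs_150 = rng.normal(2.0, 5.0, size=(40, times.size))  # 150% anger condition
epochs_100 = rng.normal(1.0, 5.0, size=(40, times.size))  # 100% anger condition

# LPP quantified here as mean amplitude in an assumed 400-800 ms window.
lpp_window = (0.4, 0.8)
print(mean_amplitude(epochs_150, times, lpp_window),
      mean_amplitude(epochs_100, times, lpp_window))
```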