631.
Infants' categorization of animals and vehicles based on static vs. dynamic attributes of stimuli was investigated in five experiments (N=158) using a categorization habituation-of-looking paradigm. In Experiment 1, 6-month-olds categorized static color images of animals and vehicles, and in Experiment 2, 6-month-olds categorized dynamic point-light displays showing only the motions of the same animals and vehicles. In Experiments 3, 4, and 5, 6- and 9-month-olds were tested in a habituation-transfer paradigm: half of the infants at each age were habituated to static images and tested with dynamic point-light displays, and the other half were habituated to dynamic point-light displays and tested with static images. Six-month-olds did not transfer. Only 9-month-olds who were habituated to dynamic displays showed evidence of category transfer to static images. Together, the findings show that 6-month-olds categorize animals and vehicles based on both static and dynamic information, and that 9-month-olds can transfer dynamic category information to static images. Transfer, static vs. dynamic information, and age effects in infant categorization are discussed.
632.
The purpose of this study was to determine whether hierarchical categorization would result from a combination of contextually controlled conditional discrimination training, stimulus generalization, and stimulus equivalence. First, differential selection responses to a specific stimulus feature were brought under contextual control. This contextual control was hierarchical in that stimuli at the top of the hierarchy all evoked one response, whereas those at the bottom each evoked different responses. The evocative functions of these stimuli generalized in predictable ways along a dimension of physical similarity. Then, these functions were indirectly acquired by a set of nonsense syllables that were related via transitivity relations to the originally trained stimuli. These nonsense syllables effectively served as names for the different stimulus classes within each level of the hierarchy.
633.
It has been suggested that perception without awareness can be demonstrated by a dissociation between performance in objective (forced-choice) and subjective (yes–no) tasks, and such dissociations have been reported both for simple stimuli and for more complex ones, including faces. However, signal detection theory (SDT) indicates that the subjective measures used to assess awareness in such studies can be affected by response bias, which could account for the observed dissociation, and this was confirmed by Balsdon and Azzopardi (2015) using simple visual targets. This finding may not apply to all types of stimulus, as the detectability of complex targets such as faces is known to be affected by their configuration as well as by their stimulus energy. We tested this with a comparison of forced-choice and yes–no detection of facial stimuli depicting happy, angry, or fearful expressions in a backward masking paradigm, using SDT methods, including a correction for unequal variances in the underlying signal distributions, to measure sensitivity independently of response criterion in 12 normal observers. In 47 out of 48 comparisons there was no significant difference between sensitivity (da) in the two tasks; hence, across the range of expressions tested, it appears that the configuration of complex stimuli does not enhance detectability independently of awareness. The results imply that, on the basis of psychophysical experiments in normal observers, there is no reason to postulate that performance and awareness are mediated by separate processes.
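The sensitivity measure cited in entry 633, da, has a standard closed form in signal detection theory when the signal and noise distributions have unequal variances. The sketch below is illustrative only and is not the authors' analysis code; the hit rate, false-alarm rate, and zROC slope s are hypothetical values.

```python
# Minimal sketch: unequal-variance SDT sensitivity d_a,
#   d_a = sqrt(2 / (1 + s**2)) * (z(H) - s * z(F)),
# where s is the zROC slope (ratio of noise SD to signal SD).
# The rates and slope below are made-up example numbers.
from scipy.stats import norm

def d_a(hit_rate: float, fa_rate: float, s: float = 1.0) -> float:
    """Sensitivity index d_a for unequal-variance signal detection."""
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    return (2.0 / (1.0 + s ** 2)) ** 0.5 * (z_hit - s * z_fa)

# Example: compare the same hit/false-alarm rates under a zROC slope of 0.8
# (estimated, e.g., from confidence ratings) versus the equal-variance case.
print(d_a(hit_rate=0.82, fa_rate=0.25, s=0.8))
print(d_a(hit_rate=0.82, fa_rate=0.25, s=1.0))
```

Because da is criterion-free, computing it separately for the forced-choice and yes–no tasks allows the two to be compared without the response-bias confound that affects raw yes–no accuracy.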
634.
We examined the effect of stimulus type and of the semantic categorization of the unexpected stimulus on sustained inattentional blindness (IB). Results showed that observers could establish an attentional set based on a higher level of semantic categorization, which tuned attention to prioritize certain semantic content over other content. An unexpected stimulus that was semantically congruent with the attended objects was more likely to be noticed, whereas a semantically incongruent stimulus tended to go unseen. A semantic category-level attentional set played a crucial role in breaking through IB. A semantically congruent Chinese-character stimulus was detected and recognized more often than a semantically congruent picture stimulus, indicating that Chinese characters had more power than pictures to attract attention and escape sustained IB during visual processing. Presumably, Chinese characters break through IB more easily because they look more distinct from pictures, rather than because they are processed more easily. Further research should test the efficiency of semantic processing for pictures versus Chinese characters in sustained IB.
635.
Pigeons are well known for their visual capabilities as well as their ability to categorize visual stimuli at both the basic and superordinate levels. We adopt a reverse engineering approach to study categorization learning: instead of training pigeons on predefined categories, we simply present stimuli and analyze neural output in search of categorical clustering at a purely neural level. We presented artificial stimuli, pictorial and grating stimuli, to pigeons without requiring any differential behavioral responding, while recording from the nidopallium frontolaterale (NFL), a higher visual area in the avian brain. The pictorial stimuli differed in color and shape; the gratings differed in spatial frequency and amplitude. We computed representational dissimilarity matrices to reveal categorical clustering based on both neural data and pecking behavior. Based on the neural output of the NFL, pictorial and grating stimuli were differentially represented in the brain. Pecking behavior showed a similar pattern, but to a lesser extent. A further subclustering within pictorial stimuli according to color and shape, and within gratings according to frequency and amplitude, was not present. Our study provides proof of concept that this reverse engineering approach, namely reading out categorical information from neural data, can be quite helpful in understanding the neural underpinnings of categorization learning.
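A minimal sketch of the representational dissimilarity matrix (RDM) analysis described in entry 635, assuming correlation distance between per-stimulus response vectors; the simulated responses, unit count, and stimulus counts are placeholders, not the recorded NFL data.

```python
# Minimal sketch: build an RDM from per-stimulus neural response vectors and
# check for category-level clustering. All data here are random placeholders.
import numpy as np
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
n_units = 40                                           # hypothetical number of recorded units
pictorial = rng.normal(0.0, 1.0, size=(8, n_units))    # 8 hypothetical pictorial stimuli
gratings = rng.normal(1.0, 1.0, size=(8, n_units))     # 8 hypothetical grating stimuli
responses = np.vstack([pictorial, gratings])           # stimuli x units

# Correlation-distance RDM: entry (i, j) = 1 - Pearson r between patterns i and j
rdm = squareform(pdist(responses, metric="correlation"))

# Simple clustering check: mean within-category vs. between-category dissimilarity
within = np.r_[rdm[:8, :8][np.triu_indices(8, 1)],
               rdm[8:, 8:][np.triu_indices(8, 1)]]
between = rdm[:8, 8:].ravel()
print(f"within-category: {within.mean():.3f}, between-category: {between.mean():.3f}")
```

If the two stimulus classes are differentially represented, within-category dissimilarities should be reliably smaller than between-category dissimilarities; the same comparison can be run on an RDM built from pecking behavior.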
636.
There is a view that faces and objects are processed by different brain mechanisms, and different factors may modulate the extent to which face mechanisms are used for objects. To distinguish these factors, we present a new parametric, multipart, three-dimensional object set that gives researchers a rich degree of control over features important for visual recognition, such as individual parts and the spatial configuration of those parts. All other properties being equal, we demonstrate that perceived facelikeness in terms of spatial configuration facilitated performance at matching individual exemplars of the new object set across viewpoint changes (Experiment 1). Importantly, facelikeness did not affect perceptual discriminability (Experiment 2) or similarity (Experiment 3). Our findings suggest that perceptual resemblance to faces based on the spatial configuration of parts is important for visual recognition even after equating physical and perceptual similarity. Furthermore, the large, parametrically controlled object set and the standardized procedures for generating additional exemplars will provide the research community with invaluable tools to further understand visual recognition and visual learning.
637.
Investigation of whole-part and composite effects in 4- to 6-year-old children gave rise to claims that face perception is fully mature within the first decade of life (Crookes & McKone, 2009). However, only internal features were tested, and the role of external features was not addressed, although external features are highly relevant for holistic face perception (Sinha & Poggio, 1996; Axelrod & Yovel, 2010, 2011). In this study, 8- to 10-year-old children and adults performed a same–different matching task with faces and watches. In this task participants attended to either internal or external features. Holistic face perception was tested using a congruency paradigm, in which face and non-face stimuli either agreed or disagreed in both features (congruent contexts) or just in the attended ones (incongruent contexts). In both age groups, pronounced context congruency and inversion effects were found for faces, but not for watches. These findings indicate holistic feature integration for faces. While inversion effects were highly similar in both age groups, context congruency effects were stronger for children. Moreover, children's face matching performance was generally better when attending to external compared to internal features. Adults tended to perform better when attending to internal features. Our results indicate that both adults and 8- to 10-year-old children integrate external and internal facial features into holistic face representations. However, in children's face representations external features are much more relevant. These findings suggest that face perception is holistic but still not adult-like at the end of the first decade of life.
638.
This study examines the idea that drawing or copying a vertically inverted face improves the accuracy of the drawing by preventing holistic interference. We used a novel parameterized face space both for generating face stimuli and for measuring the physical accuracy of drawings. One group of participants (the artists) was asked to draw 16 parameterized faces (eight upright and eight inverted). We computed two physical measures of accuracy by comparing the face-space representation of each drawing to the original face. A second and a third group of participants (the raters) compared the similarity between each original face and each pair of drawings of that face (one upright and one inverted per artist). For the second group, all faces were presented upright; for the third group, all faces were presented inverted. Our results showed that upright drawings were more accurate than inverted drawings, both in terms of the physical face-space measure and in terms of the perceptual judgments in both orientations. Our data suggest that holistic processing may aid rather than hinder face drawing accuracy.
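Entry 638's physical accuracy measure compares each drawing to the original face within a parameterized face space. Below is a minimal sketch of one plausible such measure, Euclidean distance between parameter vectors; the 16-dimensional parameter vectors and noise levels are hypothetical stand-ins, not the authors' face space or data.

```python
# Minimal sketch: physical drawing accuracy as distance in a parameterized
# face space. Parameter vectors below are random placeholders.
import numpy as np

def face_space_error(original: np.ndarray, drawing: np.ndarray) -> float:
    """Euclidean distance between two faces' parameter vectors (lower = more accurate)."""
    return float(np.linalg.norm(original - drawing))

rng = np.random.default_rng(1)
original = rng.normal(size=16)                                   # hypothetical 16-parameter face
upright_drawing = original + rng.normal(scale=0.3, size=16)      # smaller error, for illustration
inverted_drawing = original + rng.normal(scale=0.6, size=16)     # larger error, for illustration
print(face_space_error(original, upright_drawing),
      face_space_error(original, inverted_drawing))
```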
639.
Facial expression recognition in the wild is a challenging problem in computer vision research because of varying circumstances such as pose dissimilarity, age, lighting conditions, and occlusions. Numerous methods, such as point tracking, piecewise affine transformation, compact Euclidean space embedding, modified local directional patterns, and dictionary-based component separation, have been applied to this problem. In this paper, we propose a deep learning-based automatic wild facial expression recognition system in which we implement an incremental active learning framework using the VGG16 model developed by the Visual Geometry Group. We gathered a large amount of unlabeled facial expression data from Intelligent Technology Lab (ITLab) members at Inha University, Republic of Korea, to train our incremental active learning framework. These data were collected under five capture conditions (good lighting, average lighting, close to the camera, far from the camera, and natural lighting) and with seven facial expressions: happy, disgusted, sad, angry, surprised, fearful, and neutral. Our face detection framework is adapted from a multi-task cascaded convolutional network detector. Repeating the entire labeling-and-retraining process yields better performance. Our experimental results demonstrate that incremental active learning improves the starting baseline accuracy from 63% to an average of 88% on the ITLab dataset in a wild environment. We also present extensive results on facial expression benchmarks such as the Extended Cohn-Kanade dataset, as well as on the ITLab face dataset captured in the wild, and obtain better performance than state-of-the-art approaches.
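Entry 639 describes fine-tuning VGG16 inside an incremental active learning loop for seven expression classes. The TensorFlow/Keras sketch below is a hedged illustration of that general recipe rather than the authors' implementation: the data arrays, the query_labels() labeling oracle, the frozen-backbone head, and the least-confident sampling rule are all assumptions.

```python
# Minimal sketch, not the paper's code: VGG16 fine-tuning with a simple
# uncertainty-based incremental active-learning round for 7 expression classes.
import numpy as np
import tensorflow as tf
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # happy, disgusted, sad, angry, surprised, fearful, neutral

def build_model() -> tf.keras.Model:
    base = VGG16(weights="imagenet", include_top=False, input_shape=(224, 224, 3))
    base.trainable = False                          # train only the new classification head
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dense(256, activation="relu")(x)
    out = layers.Dense(NUM_CLASSES, activation="softmax")(x)
    model = models.Model(base.input, out)
    model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

def active_learning_round(model, x_labeled, y_labeled, x_pool, query_labels, k=64):
    """One incremental round: train, pick the k least-confident pool images,
    get their labels from an oracle, and move them into the labeled set."""
    model.fit(x_labeled, y_labeled, epochs=2, batch_size=32, verbose=0)
    probs = model.predict(x_pool, verbose=0)
    uncertainty = 1.0 - probs.max(axis=1)           # least-confident sampling
    picked = np.argsort(uncertainty)[-k:]
    new_y = query_labels(x_pool[picked])            # hypothetical human labeling oracle
    x_labeled = np.concatenate([x_labeled, x_pool[picked]])
    y_labeled = np.concatenate([y_labeled, new_y])
    x_pool = np.delete(x_pool, picked, axis=0)
    return x_labeled, y_labeled, x_pool
```

Running active_learning_round repeatedly corresponds to the abstract's "repeating the entire process": each round spends labeling effort on the images the current model is least sure about, which is what allows accuracy to climb from the initial baseline.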
640.
Personality as Performance   (total citations: 1; self-citations: 0; citations by others: 1)
As people seek to understand events within the world, they develop habitual tendencies related to categorization. Such tendencies can be measured by tasks that determine the relative ease or difficulty a person has in making a given distinction (e.g., between threatening and nonthreatening events). Researchers have sought to determine how categorization tendencies relate to personality traits on the one hand and emotional outcomes on the other. The results indicate that traits and categorization tendencies are distinct manifestations of personality. However, they often interact with each other. Three distinct interactive patterns are described. Categorization clearly does play a role in personality functioning, but its role goes beyond assimilation effects on behavior and experience.