161.
When confronted with multiple bistable stimuli at the same time, the visual system tends to generate a common interpretation for all stimuli. We exploit this perceptual-coupling phenomenon to investigate the perception of depth in bistable point-light figures. Observers indicate the global depth orientation of simultaneously presented point-light figures while the similarity between the stimuli is manipulated. In a first experiment, a higher occurrence of coupled percepts is found for identical figures, but coupling breaks down when either the movement pattern or both the viewpoint and the phase-relation are changed. A second experiment confirms these results, but also demonstrates that different point-light actions can be subject to perceptual coupling as long as they share the same viewpoint and exhibit equivalent degrees of perceptual ambiguity. The data are consistent with an explanation in terms of differential contributions of stored, view-dependent object and action representations and of an interaction between stages processing local stimulus features. The consequences of these results are discussed in the framework of an explicit model for the perception of depth in biological motion.
162.
We report two experiments in which participants categorized target words (e.g., BLOOD or CUCUMBER) according to their canonical colour of red or green by pointing to a red square on the left or a green square on the right. Unbeknownst to the participants, the target words were preceded by the prime words "red" or "green". We found that the curvature of participants' pointing trajectories was greater following incongruent primes (green–BLOOD) than it was following congruent primes, indicating that individuals initiated a response on the basis of the prime and then corrected that response mid-flight. This finding establishes that the processing of masked orthographic stimuli extends down to include the formulation of an overt manual response.
163.
Jeesun Kim, Visual Cognition, 2013, 21(7), 1017–1033
The study examined the effect that auditory information (speaker language/accent: Japanese or French) had on the processing of visual information (the speaker's race: Asian or Caucasian) in two forced-choice tasks: classification and perceptual judgement of animated talking characters. Two (male and female) sets of facial morphs were constructed such that a 3-D head of Caucasian appearance was gradually morphed (in 11 steps) into one of Asian appearance. Each facial morph was animated in association with spoken French/Japanese or English with a French/Japanese accent. To examine the auditory effect, each animation was played with or without sound. Experiment 1 used an Asian/Caucasian classification task. Results showed that faces heard in conjunction with Japanese or a Japanese accent were more likely to be classified as Asian than those presented without sound. Experiment 2 used a same/different judgement task. Results showed that accuracy was improved by hearing a Japanese accent compared to no sound. These results are discussed in terms of voice information acting as a cue that assists in organizing and attending to face features.
164.
The present study explored the influence of facial emotional expressions on preschoolers' identity recognition, using a two-alternative forced-choice matching task. A decrement was observed in children's performance with emotional faces compared with neutral faces, both when a happy emotional expression remained unchanged between the target face and the test faces and when the expression changed from happy to neutral or from neutral to happy between the target and the test faces (Experiment 1). Negative emotional expressions (i.e., fear and anger) also interfered with children's identity recognition (Experiment 2). The evidence obtained suggests that in preschool-age children, facial emotional expressions are processed in interaction with, rather than independently from, the encoding of facial identity information. The results are discussed in relation to relevant research conducted with adults and children.
165.
Young infants are capable of integrating auditory and visual information, and their speech perception can be influenced by visual cues, while 5-month-olds detect mismatch between mouth articulations and speech sounds. From 6 months of age, infants gradually shift their attention away from the eyes and towards the mouth in articulating faces, potentially to benefit from intersensory redundancy of audiovisual (AV) cues. Using eye tracking, we investigated whether 6- to 9-month-olds showed a similar age-related increase of looking to the mouth while observing congruent and/or redundant versus mismatched and non-redundant speech cues. Participants distinguished between congruent and incongruent AV cues, as reflected by the amount of looking to the mouth. They showed an age-related increase in attention to the mouth, but only for non-redundant, mismatched AV speech cues. Our results highlight the role of intersensory redundancy and audiovisual mismatch mechanisms in facilitating the development of speech processing in infants under 12 months of age.
166.
Counter-terrorism officials in the USA and the UK responded to the events of 11 September 2001 and 7 July 2005 with an increasing resort to "intelligence-led policing" methods such as racial and religious profiling. Reliance on intelligence, to the effect that most people who commit a certain crime have a certain ethnicity, can lead to less favourable treatment of an individual with that ethnicity because of his membership in that group, not because of any act he is suspected or known to have committed. This paper explains the context in which intelligence-led policing flourishes and how this discussion contributes to the profiling debate in both the USA and the UK, and then sets out two key contentions. First, we argue that Article 14 ECHR as applied under the UK Human Rights Act has a more protective, and less "prosecutorial", conception of discrimination than the US Equal Protection Clause, meaning that judges need not find a discriminatory motive to find that discrimination has occurred. Second, we contend that Article 14 provides the judiciary with the key tool of proportionality, which, when properly applied, makes it harder for discrimination to stand up to scrutiny.
167.
Using traditional face perception paradigms, the current study explores unfamiliar face processing in two neurodevelopmental disorders. Previous research indicates that autism and Williams syndrome (WS) are both associated with atypical face processing strategies. The current research involves these groups in an exploration of feature salience for processing the eye and mouth regions of unfamiliar faces. The tasks specifically probe unfamiliar face matching by using (a) upper or lower face features, (b) the Thatcher illusion, and (c) featural and configural face modifications to the eye and mouth regions. Across tasks, individuals with WS mirror the typical pattern of performance, with greater accuracy for matching faces using the upper than the lower features, susceptibility to the Thatcher illusion, and greater detection of eye than mouth modifications. Participants with autism show a generalized performance decrement alongside atypicalities: deficits in using the eye region and configural face cues to match unfamiliar faces. The results are discussed in terms of feature salience, structural encoding, and the phenotypes typically associated with these neurodevelopmental disorders.
168.
Two experiments investigated the role that different face regions play in a variety of social judgements commonly made from facial appearance (sex, age, distinctiveness, attractiveness, approachability, trustworthiness, and intelligence). These judgements lie along a continuum, from those with a clear physical basis and high consequent accuracy (sex, age) to judgements that can achieve a degree of consensus between observers despite having little known validity (intelligence, trustworthiness). Results from Experiment 1 indicated that the face's internal features (eyes, nose, and mouth) provide information that is more useful for social inferences than the external features (hair, face shape, ears, and chin), especially when judging traits such as approachability and trustworthiness. Experiment 2 investigated how judgement agreement was affected when the upper head, eye, nose, or mouth regions were presented in isolation or when these regions were obscured. A different pattern of results emerged for different characteristics, indicating that different types of facial information are used in the various judgements. Moreover, the informativeness of a particular region/feature depends on whether it is presented alone or in the context of the whole face. These findings provide evidence for the importance of holistic processing in making social attributions from facial appearance.
169.
In 1932, Frederic Bartlett laid the foundation for later schema theory. His key assumption, that previous knowledge affects the processing of new stimuli, was illustrated in the famous "portrait d'homme" series: sequenced reproductions of ambiguous stimuli showed progressive object-likeness. As Bartlett pointed out, activation of specific schemata, for instance "the face schema", biases memory retrieval towards such schemata. In five experiments (Experiment 1, n = 53; Experiment 2, n = 177; Experiment 3, n = 36; Experiment 4, n = 6; Experiment 5, n = 2), we tested several factors potentially influencing retrieval biases, for example by varying the general procedure of reproduction (repeated vs. serial) and by omitting versus providing visual or semantic cues for activating face schemata. Participants inspected face-like stimuli with the caption "portrait of the human" and reproduced them repeatedly under specific conditions. None of the experiments revealed a systematic tendency towards the effect Bartlett described, even when the participants were explicitly instructed to draw "a face" like the previously inspected one. In one of the "serial reproduction" experiments, we even obtained contrary effects, with decreasing face-likeness over the reproduction generations. A close analysis of the original findings raises questions about the replicability of Bartlett's results, qualifying the "portrait d'homme" series as little more than an illustrative example of the main idea of reconstructive memory.
170.
Face recognition and word reading are thought to be mediated by relatively independent cognitive systems lateralised to the right and left hemispheres, respectively. If so, we should expect a higher incidence of face recognition problems in patients with right hemisphere injury and a higher incidence of reading problems in patients with left hemisphere injury. We tested this hypothesis in a group of 31 patients with unilateral right or left hemisphere infarcts in the territory of the posterior cerebral arteries. In most domains tested (e.g., visual attention, object recognition, visuo-construction, motion perception), both patient groups performed significantly worse than a matched control group. In particular, we found a significant number of face recognition deficits in patients with left hemisphere injury and a significant number of word reading deficits in patients with right hemisphere injury. This suggests that face recognition and word reading may be mediated by more bilaterally distributed neural systems than is commonly assumed.