221.
In modern digital applications, users often interact with virtual representations of themselves or others, called avatars. We examined how these avatars and their perspectives influence stimulus–response compatibility in a Simon task. Participants responded to light/dark blue stimuli with left/right key presses in the presence of a task-irrelevant avatar. Changes in stimulus–response compatibility were used to quantify changes in the mental representation of the task and perspective taking toward this avatar. Experiments 1 and 2 showed that perspective taking for an avatar occurred in orthogonal stimulus–response mappings, causing a compatibility effect from the avatar’s point of view. In the following two experiments we introduced a larger variety of angular disparities between the participant and avatar. In Experiment 3, the Simon effect with lateralized stimulus positions remained largely unaffected by the avatar, pointing toward an absence of perspective taking. In Experiment 4, after avatar hand movements were added in order to strengthen the participants’ sense of agency over the avatar, a spatial compatibility effect from the avatar’s perspective was observed again, and hints of the selective use of perspective taking on a trial-by-trial basis were found. Overall, the results indicate that users can incorporate the perspective of an avatar into their mental representation of a situation, even when this perspective is unnecessary to complete a task, but that certain contextual requirements have to be met.
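The compatibility effect used as the dependent measure here is simply the reaction-time difference between incompatible and compatible trials. A minimal sketch of that computation, assuming per-trial data; the `compatibility_effect` helper and the trial values are illustrative, not taken from the study:

```python
import statistics

def compatibility_effect(trials):
    """Simon-type compatibility effect: the difference in mean reaction
    time (ms) between incompatible and compatible trials.
    Each trial is a (compatible: bool, rt_ms: float) pair."""
    compatible = [rt for comp, rt in trials if comp]
    incompatible = [rt for comp, rt in trials if not comp]
    return statistics.mean(incompatible) - statistics.mean(compatible)

# Hypothetical trial data: responses are faster when the stimulus side
# (e.g., relative to the avatar's viewpoint) matches the response side.
trials = [(True, 420.0), (True, 430.0), (True, 425.0),
          (False, 465.0), (False, 455.0), (False, 460.0)]
print(compatibility_effect(trials))  # 35.0
```

A positive value from the avatar's frame of reference, as in Experiments 1, 2, and 4, is what indicates perspective taking.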
222.
Hübner R. Perception & Psychophysics, 2001, 63(6): 945-951
Guided Search 2 (GS2) is currently one of the most detailed models of visual search and has been used to predict search times for different stimulus conditions by means of detailed computer simulations. The present article goes a step further and presents formulas that allow for the calculation of the search times and their variances. Moreover, these formulas can be applied to fit GS2 to data. An example is provided in which GS2 is fitted to search functions representing search asymmetries.
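Search functions of the kind being fitted here relate mean reaction time to display set size. As a rough illustration of what "fitting search functions" involves, here is a plain least-squares fit of a linear search function; this is a generic sketch, not GS2's actual closed-form expressions, and all data values are invented:

```python
def fit_search_function(set_sizes, mean_rts):
    """Ordinary least-squares fit of the linear search function
    RT = intercept + slope * set_size; returns (intercept, slope)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(mean_rts) / n
    sxx = sum((x - mx) ** 2 for x in set_sizes)
    sxy = sum((x - mx) * (y - my) for x, y in zip(set_sizes, mean_rts))
    slope = sxy / sxx
    return my - slope * mx, slope

# Hypothetical data illustrating a search asymmetry: the same items
# yield a steep search slope under one target/distractor assignment
# and a shallow slope when the roles are swapped.
sizes = [4, 8, 12, 16]
hard = [520, 640, 760, 880]    # ~30 ms/item
easy = [480, 500, 520, 540]    # ~5 ms/item
print(fit_search_function(sizes, hard))  # (400.0, 30.0)
print(fit_search_function(sizes, easy))  # (460.0, 5.0)
```

The asymmetry shows up as the difference in fitted slopes (30 vs. 5 ms/item).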
223.
In a series of four experiments, we investigated the conditions under which target-absent responses are faster than target-present responses in visual search. Previous experiments have shown that such an absent-advantage occurs mainly for homogeneous distractors arranged in a regular pattern. From these results, it has been concluded that the absent-advantage is due to perceptual processes, such as grouping by similarity. Our data show that such processes are not sufficient. Rather, the absent-advantage is the result of interactions between perceptual and decisional processes. Certain perceptual conditions, such as randomizing stimulus patterns, lead to specific criteria settings that produce an absent-advantage. That such an account can explain our main results is demonstrated by modeling our data with a modified version of the Guided Search 2 model.
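One way to see how criterion settings alone can produce an absent-advantage is a toy serial-search model. This is a deliberately simplified sketch with invented timing parameters, not the modified Guided Search 2 model the authors actually fit:

```python
def mean_rt(n_items, t_item=50, base=300):
    """Toy serial self-terminating search. Compare expected RTs (ms)
    under a strict quitting criterion (scan every item before
    responding 'absent') versus a lenient criterion that verifies a
    homogeneous, regular display in a single group-level check."""
    present = base + t_item * (n_items + 1) / 2   # target found halfway, on average
    absent_strict = base + t_item * n_items       # exhaustive scan
    absent_lenient = base + t_item * 1            # one group-level check
    return present, absent_strict, absent_lenient

present, strict, lenient = mean_rt(8)
print(present, strict, lenient)  # 525.0 700 350
```

Under the strict criterion, absent responses are slower than present ones (700 > 525 ms); under the lenient criterion enabled by grouping, the pattern reverses (350 < 525 ms), yielding an absent-advantage from a decisional change alone.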
224.
Walking or talking? Behavioral and neurophysiological correlates of action verb processing
Brain activity elicited by visually presented words was investigated using behavioral measures and current source densities calculated from high-resolution EEG recordings. Verbs referring to actions usually performed with different body parts were compared. Behavioral data indicated faster processing of verbs referring to actions performed with the face muscles and articulators (face-related words) compared to verbs referring to movements involving the lower half of the body (leg-related words). Significant topographical differences in brain activity elicited by verb types were found starting approximately 250 ms after word onset. Differences were seen at recording sites located over the motor strip and adjacent frontal cortex. At the vertex, close to the cortical representation of the leg, leg-related verbs (for example, to walk) produced strongest in-going currents, whereas for face-related verbs (for example, to talk) the most in-going activity was seen at more lateral electrodes placed over the left Sylvian fissure, close to the representation of the articulators. Thus, action words caused differential activation along the motor strip, with strongest in-going activity occurring close to the cortical representation of the body parts primarily used for carrying out the actions the verbs refer to. Topographically specific physiological signs of word processing started earlier for face-related words and lasted longer for verbs referring to leg movements. We conclude that verb types can differ in their processing speed and can elicit neurophysiological activity with different cortical topographies. These behavioral and physiological differences can be related to cognitive processes, in particular to lexical semantic access. Our results are consistent with associative theories postulating that words are organized in the brain as distributed cell assemblies whose cortical distributions reflect the words' meanings.
225.
Viewpoint dependence in visual and haptic object recognition
On the whole, people recognize objects best when they see the objects from a familiar view and worse when they see the objects from views that were previously occluded from sight. Unexpectedly, we found haptic object recognition to be viewpoint-specific as well, even though hand movements were unrestricted. This viewpoint dependence was due to the hands preferring the back "view" of the objects. Furthermore, when the sensory modalities (visual vs. haptic) differed between learning an object and recognizing it, recognition performance was best when the objects were rotated back-to-front between learning and recognition. Our data indicate that the visual system recognizes the front view of objects best, whereas the hand recognizes objects best from the back.
226.
On the basis of a systems-theoretical approach, it was hypothesized that event-related potentials (ERPs) are superpositions of stimulus-evoked and time-locked EEG rhythms reflecting resonance properties of the brain (Başar, 1980). This approach led to frequency analysis of ERPs as a way of analyzing evoked rhythms. The present article outlines the basic features of ERP frequency analysis in comparison to ERP wavelet analysis, a recently introduced method of time-frequency analysis. Both methods were used in an investigation of the functional correlates of evoked rhythms in which auditory and visual ERPs were recorded from the cat brain. Intracranial electrodes were located in the primary auditory cortex and in the primary visual cortex, thus permitting "cross-modality" experiments. Responses to adequate stimulation (e.g., a visual ERP recorded from the visual cortex) were characterized by high-amplitude alpha (8-16 Hz) responses, which were not observed for inadequate stimulation. This result is interpreted as a hint at a special role of alpha responses in primary sensory processing. The results of frequency analysis and of wavelet analysis were quite similar, with possible advantages of wavelet methods for single-trial analysis. The results of frequency analysis as performed earlier were thus confirmed by wavelet analysis. This supports the view that ERP frequency components correspond to evoked rhythms with a distinct biological significance.
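The wavelet method described amounts to convolving the signal with a complex oscillatory kernel and taking the magnitude, which yields power as a function of time at a chosen frequency. A minimal self-contained sketch in that spirit, using a complex Morlet-style wavelet on a simulated alpha-band burst; the sampling rate, frequency, and wavelet width are illustrative assumptions, not the recording parameters of the study:

```python
import math
import cmath

def morlet_power(signal, fs, freq, n_cycles=5):
    """Time-varying power at `freq` (Hz): convolve the signal with a
    Gaussian-windowed complex exponential (Morlet-style wavelet) and
    return the squared magnitude at each sample."""
    sigma = n_cycles / (2 * math.pi * freq)        # wavelet width in seconds
    half = int(3 * sigma * fs)                     # truncate at +/- 3 sigma
    wavelet = [cmath.exp(2j * math.pi * freq * (k / fs))
               * math.exp(-((k / fs) ** 2) / (2 * sigma ** 2))
               for k in range(-half, half + 1)]
    norm = sum(abs(w) for w in wavelet)
    out = []
    for i in range(len(signal)):
        acc = 0j
        for j, w in enumerate(wavelet):
            k = i + j - half
            if 0 <= k < len(signal):
                acc += signal[k] * w
        out.append(abs(acc / norm) ** 2)
    return out

# Simulated "evoked rhythm": a burst of 12 Hz (alpha-band) activity
# embedded in an otherwise silent 1-second epoch.
fs = 256
sig = [math.sin(2 * math.pi * 12 * t / fs) if 64 <= t < 192 else 0.0
       for t in range(256)]
power = morlet_power(sig, fs, 12)
print(power[128] > power[10])  # True: power peaks inside the burst
```

Unlike a single Fourier spectrum of the whole epoch, this output localizes the alpha response in time, which is what makes the wavelet variant attractive for single-trial analysis.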
227.
The model presented here gradually learns to perform planning tasks from the class of job-shop scheduling problems. It uses Soar's chunking mechanism to acquire episodic memories about the order in which to schedule jobs on machines. The model was built to account for many qualitative (e.g., transfer effects) and quantitative (e.g., solution times) regularities found in a previous empirical study. In a validation study, the same scheduling tasks were given to the model and to 14 subjects. The model generally fit these data, with the restrictions that it performs the task (in simulated time) faster than the subjects and that its performance improves somewhat more quickly than theirs. The model provides an explanation of the noise typically found in problem-solving times: it is the result of learning actual pieces of knowledge that transfer more or less well to new situations, but rarely by an average amount. Only when the data are aggregated (e.g., over subjects) does a smooth power law of practice appear. This mechanism demonstrates how a symbolic model of information processing can exhibit gradual changes in behavior, and how the apparent acquisition of general procedures can occur without explicit learning of declarative rules. We suggest that this may represent a form of implicit learning.
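The claim that the smooth power law of practice appears only in aggregated data can be illustrated directly: simulate individual learning curves whose trial-to-trial noise stands in for knowledge that transfers more or less well, average them, and fit a power function. This is a generic sketch under invented parameters, not the Soar model itself:

```python
import math
import random

def fit_power_law(trials, times):
    """Fit the power law of practice T = a * trial**(-b) by linear
    regression in log-log coordinates; returns (a, b)."""
    xs = [math.log(t) for t in trials]
    ys = [math.log(T) for T in times]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return math.exp(my - slope * mx), -slope

random.seed(1)
trials = list(range(1, 21))
# Individual "subjects": an underlying power-law practice curve
# (a=100, b=0.5) multiplied by large trial-to-trial noise.
subjects = [[100 * t ** -0.5 * random.uniform(0.5, 1.5) for t in trials]
            for _ in range(50)]
# Aggregating over subjects averages the noise away.
averaged = [sum(s[i] for s in subjects) / len(subjects)
            for i in range(len(trials))]
a, b = fit_power_law(trials, averaged)
print(round(b, 2))  # recovers an exponent close to the generating 0.5
```

Each individual curve is jagged, yet the group average follows a clean power function, mirroring the model's account of where the apparent smoothness comes from.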
230.
Luc Steels. Kognitionswissenschaft, 1999, 8(4): 143-150
Linguistics must again concentrate on the evolutionary nature of language, so that language models are more realistic with respect to human natural languages and have a greater explanatory force. Multi-agent systems are proposed as a possible route to develop such evolutionary models and an example is given of a concrete experiment in the origins and evolution of word-meaning based on a multi-agent approach.
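A toy version of such a multi-agent word-meaning experiment is a naming game: agents repeatedly pair up, a speaker names an object, and local success/failure updates drive the population toward a shared vocabulary. This sketch is a generic illustration of the approach, not the actual experimental setup of the article:

```python
import random

def naming_game(n_agents=20, rounds=3000, seed=0):
    """Minimal naming game for a single object. Each round a random
    speaker names the object for a random hearer; on success both keep
    only that word, on failure the hearer adopts it. Words are invented
    only by agents with an empty vocabulary, so at most one word per
    agent ever enters circulation."""
    rng = random.Random(seed)
    vocab = [set() for _ in range(n_agents)]
    for _ in range(rounds):
        s, h = rng.sample(range(n_agents), 2)
        if not vocab[s]:
            vocab[s].add(f"w{rng.randrange(10**6)}")  # invent a new word
        word = rng.choice(sorted(vocab[s]))
        if word in vocab[h]:       # success: both align on this word
            vocab[s] = {word}
            vocab[h] = {word}
        else:                      # failure: hearer learns the word
            vocab[h].add(word)
    return vocab

vocab = naming_game()
words = set().union(*vocab)
print(1 <= len(words) <= 20)  # True: at most one invention per agent
```

Self-organization of this kind, rather than any central coordination, is what lets a shared word-meaning convention emerge and change over time in the population.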