141.
Kathleen B. McDermott, Cynthia L. Wooldridge, Heather J. Rice, Jeffrey J. Berg, Karl K. Szpunar. Quarterly Journal of Experimental Psychology (2006), 2016, 69(2): 243-253
According to the constructive episodic simulation hypothesis, remembering and episodic future thinking are supported by a common set of constructive processes. In the present study, we directly addressed this assertion in the context of third-person perspectives that arise during remembering and episodic future thought. Specifically, we examined the frequency with which participants remembered past events or imagined future events from third-person perspectives. We also examined the different viewpoints from which third-person perspective events were remembered or imagined. Although future events were somewhat more likely to be imagined from a third-person perspective, the spatial viewpoint distributions of third-person perspectives characterizing remembered and imagined events were highly similar. These results suggest that a similar constructive mechanism may be at work when people remember events from a perspective that could not have been experienced in the past and when they imagine events from a perspective that could not be experienced in the future. The findings are discussed in terms of their consistency with—and as extensions of—the constructive episodic simulation hypothesis.
142.
Christie Haskell. Quarterly Journal of Experimental Psychology (2006), 2016, 69(11): 2147-2165
Are the effects of memory and attention on perception synergistic, antagonistic, or independent? Tested separately, memory and attention have each been shown to affect the accuracy of orientation judgements. When multiple stimuli are presented sequentially versus simultaneously, error variance is reduced. When a target is validly cued, precision is increased. What if they are manipulated together? We combined memory and attention manipulations in an orientation judgement task to answer this question. Two circular gratings were presented sequentially or simultaneously. On some trials a brief luminance cue preceded the stimuli. Participants were cued to report the orientation of one of the two gratings by rotating a response grating. We replicated the finding that error variance is reduced on sequential trials. Critically, we found interacting effects of memory and attention. Valid cueing reduced the median absolute error only when two stimuli appeared together, improving it to the level of performance on uncued sequential trials, whereas invalid cueing always increased error. This effect was not mediated by cue predictiveness; however, predictive cues reduced the standard deviation of the error distribution, whereas nonpredictive cues reduced “guessing”. Our results suggest that, when the demand on memory is greater than a single stimulus, attention is a bottom-up process that prioritizes stimuli for consolidation. Thus, attention and memory are synergistic.
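Orientation is circular with a 180° period, so "absolute error" in a grating judgement task has to be computed on wrapped differences. A minimal sketch of that computation (the function name and the sample trial values are ours, not the study's):

```python
import statistics

def orientation_error(reported, true):
    """Signed error between two grating orientations in degrees,
    wrapped into (-90, 90] because orientation repeats every 180 deg."""
    d = (reported - true) % 180.0
    return d - 180.0 if d > 90.0 else d

# Toy (reported, true) orientation pairs in degrees.
trials = [(10.0, 175.0), (88.0, 92.0), (45.0, 40.0)]
errors = [orientation_error(r, t) for r, t in trials]
median_abs_error = statistics.median(abs(e) for e in errors)
print(errors, median_abs_error)  # -> [15.0, -4.0, 5.0] 5.0
```

Note that reporting 10° against a true 175° counts as a 15° error, not 165°, which is why the wrap matters before taking the median.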
143.
Leila Kantola, Roger P. G. van Gompel. Quarterly Journal of Experimental Psychology (2006), 2016, 69(6): 1109-1128
Two experiments investigated whether the choice of anaphoric expression is affected by the presence of an addressee. Following a context sentence and visual scene, participants described a target scene that required anaphoric reference. They described the scene either to an addressee (Experiment 1) or without an addressee (Experiment 2). When an addressee was present in the task, participants used more pronouns and fewer repeated noun phrases when the referent was the grammatical subject in the context sentence than when it was the grammatical object, and they used more pronouns when there was no competitor than when there was. They used fewer pronouns and more repeated noun phrases when a visual competitor was present in the scene than when there was no visual competitor. In the absence of an addressee, linguistic context effects were the same as those when an addressee was present, but the visual effect of the competitor disappeared. We conclude that visual salience effects are due to adjustments that speakers make when they produce reference for an addressee, whereas linguistic salience effects appear whether or not speakers have addressees.
144.
Isabel Orenes, Linda Moxey, Christoph Scheepers, Carlos Santamaría. Quarterly Journal of Experimental Psychology (2006), 2016, 69(6): 1082-1092
The literature assumes that negation is more difficult to understand than affirmation, but this might depend on the pragmatic context. The goal of this paper is to show that pragmatic knowledge modulates the unfolding processing of negation due to the previous activation of the negated situation. To test this, we used the visual world paradigm. In this task, we presented affirmative (e.g., her dad was rich) and negative sentences (e.g., her dad was not poor) while viewing two images of the affirmed and denied entities. The critical sentence in each item was preceded by one of three types of contexts: an inconsistent context (e.g., She supposed that her dad had little savings) that activates the negated situation (a poor man), a consistent context (e.g., She supposed that her dad had enough savings) that activates the actual situation (a rich man), or a neutral context (e.g., her dad lived on the other side of town) that activates neither of the two models previously suggested. The results corroborated our hypothesis. Pragmatics is implicated in the unfolding processing of negation. We found an increase in fixations on the target compared to the baseline for negative sentences at 800 ms in the neutral context, 600 ms in the inconsistent context, and 1450 ms in the consistent context. Thus, when the negated situation has been previously introduced via an inconsistent context, negation is facilitated.
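The onsets reported above come from locating the first point at which target fixations exceed baseline. A hypothetical sketch of that kind of divergence-point analysis, using our own toy fixation proportions rather than the study's data:

```python
def divergence_point(target_props, baseline, bin_ms):
    """Return the onset (ms) of the first time bin where the proportion
    of fixations on the target exceeds the baseline proportion,
    or None if it never does."""
    for i, p in enumerate(target_props):
        if p > baseline:
            return i * bin_ms
    return None

# Toy fixation proportions in successive 200 ms bins for one condition.
neutral = [0.48, 0.49, 0.50, 0.50, 0.62, 0.70]
print(divergence_point(neutral, baseline=0.50, bin_ms=200))  # -> 800
```

Real visual-world analyses typically add a statistical criterion (e.g., several consecutive significant bins) before declaring divergence; this sketch only shows the bookkeeping.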
145.
We analysed, under laboratory test conditions, how German cockroach larvae oriented their outgoing foraging trip from their shelter. Our results stressed the importance of external factors, such as the availability and spatial distribution of food sources, in the choice of a foraging strategy within their home range. When food sources were randomly distributed, larvae adopted a random food search strategy. When food distribution was spatially predictable and reliable, cockroaches were able to relate the presence of food to a landmark during a 3-day training period and to develop an oriented search strategy. Cockroaches were able to associate learned spatial information about their home range with the presence of food resources and thereby improve their foraging efficiency. However, conflict experiments revealed that detection of food odour overrode learned landmark cues.
Received: 16 October 1999 / Accepted after revision: 18 July 2000
146.
One of the fundamental issues in the study of animal cognition concerns categorization. Although domestic dogs (Canis familiaris) are on the brink of becoming a model animal in animal psychology, their categorization abilities are unknown. This is probably largely due to the absence of an adequate method for testing dogs' ability to discriminate between large sets of pictures in the absence of human cueing. Here we present a computer-automated touch-screen testing procedure, which enabled us to test visual discrimination in dogs while social cueing was ruled out. Using a simultaneous discrimination procedure, we first trained dogs (N = 4) to differentiate between a set of dog pictures (N = 40) and an equally large set of landscape pictures. All subjects learned to discriminate between the two sets and showed successful transfer to novel pictures. Interestingly, presentation of pictures providing contradictory information (novel dog pictures mounted on familiar landscape pictures) did not disrupt performance, which suggests that the dogs used a category-based response rule, with classification coupled to category-relevant features (of the dog) rather than to item-specific features (of the background). We conclude that dogs are able to classify photographs of natural stimuli by means of a perceptual response rule using a newly established touch-screen procedure.
Electronic supplementary material The online version of this article (doi:) contains supplementary material, which is available to authorized users.
147.
This study demonstrates that associations between colour words and the colours they denote are not mandatory. Experiments 1–3 used a go/no-go task in which participants responded to one print colour and one word and withheld response from another print colour and another word. In Experiment 1, the content of the words denoted noncolour entities. In Experiment 2, the two words denoted two colours that were different from the target print colours. In Experiment 3, the words denoted the same colours as the target print colours, but each response set paired an incompatible print colour and word (e.g., one response to the print colour blue and the word “green” and another response to the print colour green and the word “blue”). Participants performed equally well in all the experiments. Experiment 4a used Arabic digits and words denoting numbers, two formats that are known to have shared representations. Here, participants had difficulties separating their responses to the digits and words. These results suggest that representations of words are distinct from the content that they represent, supporting the existence of distinct verbal and colour modules.
148.
Sophisticated machine learning algorithms have been successfully applied to functional neuroimaging data in order to characterize internal cognitive states. But is it possible to “mind-read” without the scanner? Capitalizing on the robust finding that the contents of working memory guide visual attention toward memory-matching objects, we trained a multivariate pattern classifier on behavioural indices of attentional guidance. Working memory representations were successfully decoded from behaviour alone, both within and between individuals. The current study provides a proof-of-concept for applying machine learning techniques to simple behavioural outputs (e.g., response times) in order to decode information about specific internal cognitive states.
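The logic of decoding from behaviour alone can be illustrated with a toy simulation (none of these names or numbers come from the study, and the authors used a proper multivariate classifier rather than this single-rule version): because attention is guided toward memory-matching objects, probes that match the memory content tend to be responded to faster, so even a trivial decision rule on response times recovers the memory content above chance.

```python
import random

random.seed(1)
n_trials = 200
trials = []
for _ in range(n_trials):
    memory_item = random.choice(["A", "B"])
    # Probes matching the held item are attended and answered ~40 ms faster.
    rt_a = random.gauss(500 - (40 if memory_item == "A" else 0), 30)
    rt_b = random.gauss(500 - (40 if memory_item == "B" else 0), 30)
    trials.append((rt_a, rt_b, memory_item))

# Minimal "decoder": predict the item whose probe drew the faster response.
correct = sum((rt_a < rt_b) == (item == "A") for rt_a, rt_b, item in trials)
accuracy = correct / n_trials
print(accuracy)  # well above the 0.5 chance level
```

A real pattern classifier would pool many such behavioural features across trials and cross-validate within and between participants; the point here is only that the response-time signature carries decodable information.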
149.
Clinical signs of damage to the egocentric reference system range from the inability to detect stimuli in the real environment to a defect in recovering items from an internal representation. Despite clinical dissociations, current interpretations attribute all symptoms to a single perturbation, differentially expressed according to the medium explored (perceptual or representational). We propose an alternative account based on the functional distinction between two separate egocentric mechanisms: one allowing construction of the immediate point of view, the other extracting a required perspective within a mental representation. Support for this claim comes from recent results in the domain of navigation, showing that separate cognitive mechanisms maintain the egocentric reference when actively exploring the visual space as opposed to moving according to an internal map. These mechanisms likely follow separate developmental pathways, seemingly depend on distinct neural pathways, and are used independently by healthy adults, reflecting task demands and individual cognitive style. Implications for spatial cognition and social skills are discussed.
150.
This study examined self-recognition processing in both the auditory and visual modalities by determining how comparable hearing a recording of one's own voice was to seeing a photograph of one's own face. We also investigated whether the simultaneous presentation of auditory and visual self-stimuli would either facilitate or inhibit self-identification. Ninety-one participants completed reaction-time tasks of self-recognition when presented with their own faces, own voices, and combinations of the two. Reaction time and errors made when responding with both the right and left hand were recorded to determine if there were lateralization effects on these tasks. Our findings showed that visual self-recognition for facial photographs appears to be superior to auditory self-recognition for voice recordings. Furthermore, a combined presentation of one's own face and voice appeared to inhibit rather than facilitate self-recognition, and there was a left-hand advantage for reaction time on the combined-presentation tasks.