31.
People from diverse backgrounds enrich the rural, regional, and remote communities where they relocate and settle. Research about rural diversity tends to focus on demographics (age, gender, country of origin) while ignoring personal narratives of integration, for example, engagements with religious institutions (such as the local Christian church). This article presents the research themes from an investigation using co-operative inquiry into rural diversity and the Anglican Church, with specific reference to the Australian experience. It is a cross-disciplinary dialogic exchange between social workers and theologians. Positive narratives about connection, welcome, participation, and belonging are shared.
32.
Continuous bag of words (CBOW) and skip-gram are two recently developed models of lexical semantics (Mikolov, Chen, Corrado, & Dean, Advances in Neural Information Processing Systems, 26, 3111–3119, 2013). Each has been demonstrated to perform markedly better at capturing human judgments about semantic relatedness than competing models (e.g., latent semantic analysis; Landauer & Dumais, Psychological Review, 104(2), 211, 1997; hyperspace analogue to language; Lund & Burgess, Behavior Research Methods, Instruments, & Computers, 28(2), 203–208, 1996). The new models were largely developed to address practical problems of meaning representation in natural language processing. Consequently, very little attention has been paid to the psychological implications of the performance of these models. We describe the relationship between the learning algorithms employed by these models and Anderson’s rational theory of memory (J. R. Anderson & Milson, Psychological Review, 96(4), 703, 1989) and argue that CBOW is learning word meanings according to Anderson’s concept of needs probability. We also demonstrate that CBOW can account for nearly all of the variation in lexical access measures typically attributable to word frequency and contextual diversity—two measures that are conceptually related to needs probability. These results suggest two conclusions: One, CBOW is a psychologically plausible model of lexical semantics. Two, word frequency and contextual diversity do not capture learning effects but rather memory retrieval effects.
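The CBOW learning rule the abstract refers to can be illustrated with a minimal sketch: the model averages the embeddings of a word's context and nudges that average toward predicting the center word. Everything below (the toy corpus, dimensionality, learning rate, and epoch count) is an illustrative assumption, not the setup used by Mikolov et al.:

```python
import numpy as np

# Toy corpus and vocabulary (illustrative only).
corpus = "the cat sat on the mat the dog sat on the rug".split()
vocab = sorted(set(corpus))
idx = {w: i for i, w in enumerate(vocab)}
V, D, window = len(vocab), 8, 2

rng = np.random.default_rng(0)
W_in = rng.normal(0, 0.1, (V, D))   # input (context) embeddings
W_out = rng.normal(0, 0.1, (D, V))  # output (center-word) weights

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# CBOW: predict the center word from the average of its context embeddings.
for epoch in range(300):
    for pos, center in enumerate(corpus):
        ctx = [idx[corpus[j]]
               for j in range(max(0, pos - window),
                              min(len(corpus), pos + window + 1))
               if j != pos]
        h = W_in[ctx].mean(axis=0)      # hidden layer = mean context vector
        p = softmax(W_out.T @ h)        # predicted distribution over vocab
        err = p.copy()
        err[idx[center]] -= 1.0         # gradient of the cross-entropy loss
        W_out -= 0.1 * np.outer(h, err)
        grad_h = W_out @ err
        for c in ctx:                   # share the gradient across context words
            W_in[c] -= 0.1 * grad_h / len(ctx)

def most_similar(word):
    """Nearest neighbour of `word` in the learned embedding space."""
    v = W_in[idx[word]]
    sims = W_in @ v / (np.linalg.norm(W_in, axis=1) * np.linalg.norm(v) + 1e-9)
    sims[idx[word]] = -np.inf           # exclude the word itself
    return vocab[int(np.argmax(sims))]
```

After training on even this tiny corpus, words that occur in similar contexts drift toward each other in the embedding space, which is the property the relatedness benchmarks in the abstract exploit.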
33.
34.
Geoffrey Loftus, Editor of Memory & Cognition from 1994 to 1997, strongly encouraged presentation of figures with error bars and avoidance of null hypothesis significance testing (NHST). The authors examined 696 Memory & Cognition articles published before, during, and after the Loftus editorship. Use of figures with bars increased to 47% under Loftus's editorship and then declined. Bars were rarely used for interpretation, and NHST remained almost universal. Analysis of 309 articles in other psychology journals confirmed that Loftus's influence was most evident in the articles he accepted for publication, but was otherwise limited. An e-mail survey of authors of papers accepted by Loftus revealed some support for his policy, but allegiance to traditional practices as well. Reform of psychologists' statistical practices would require more than editorial encouragement.
35.
We examine the extent to which retrieval from very long-term autobiographical memory is similar when participants are asked to retrieve from widely differing periods of time. Three groups of 20 participants were given 4 min to recall autobiographical events from the last 5 weeks, 5 months, or 5 years. Following recall, the participants dated their events. Similar retrieval rates, relative recency effects, and relative lag-recency effects were found, despite the fact that the considered time scales varied by a factor of 52. These data are broadly consistent with the principle of recency, the principle of contiguity (Howard & Kahana, 2002), and scale similarity in the rates of recall (Brown, Neath, & Chater, 2007; Maylor, Chater, & Brown, 2001). These findings are taken as support for models of memory that predict time scale similarity in retrieval, such as SIMPLE (Brown et al., 2007) and TCM (Howard & Kahana, 2002).
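The scale similarity that models such as SIMPLE predict can be sketched directly: because such models compare items by distances between *log* retention intervals, multiplying every interval by a constant (here 52, the factor separating the shortest and longest time scales in the study) leaves predicted distinctiveness unchanged. The pairwise-confusability form and the parameter c below are illustrative assumptions, not a published model fit:

```python
import math

def distinctiveness(times, c=1.0):
    """SIMPLE-style discriminability: each item's share of similarity,
    where similarity decays exponentially with log-temporal distance."""
    logs = [math.log(t) for t in times]
    out = []
    for li in logs:
        total = sum(math.exp(-c * abs(li - lj)) for lj in logs)
        out.append(1.0 / total)
    return out

weeks = [1, 2, 3, 4, 5]                   # retention intervals in weeks
rescaled = [t * 52 for t in weeks]        # the same intervals, 52x longer

d_weeks = distinctiveness(weeks)
d_rescaled = distinctiveness(rescaled)
# Multiplying every t by 52 shifts every log(t) by the same constant,
# so all pairwise log-distances -- and hence all predictions -- are unchanged.
```

This invariance is the formal sense in which retrieval should look the same over 5 weeks, 5 months, or 5 years.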
36.
The presentation of a stimulus below the threshold of conscious awareness can exert an influence on the processing of a subsequent target. One such consequence of briefly presented “primes” is seen in the negative compatibility effect. The response time (RT) to determine the left-right orientation of an arrow (i.e., the target) is relatively slow if the prime is also an arrow whose direction corresponds to that of the target. When the target's direction is opposite that of the prime, RTs are relatively fast. In four experiments, we examined whether the prime shifts attention from the location of the subsequent target and whether this attention shift influences target processing. Results showed that the prime does indeed move attention. The consequence of this attention movement is that the representation of direction is affected. Specifically, RTs to process an arrow are shorter if the arrow’s direction is compatible with the last shift of attention. Furthermore, this interference occurs at a conceptual level concerning the representation of left and right rather than at the motor planning level. We argue that a shift in attention brought about by the prime can create a negative compatibility-like effect.
37.
Replication is fundamental to science, so statistical analysis should give information about replication. Because p values dominate statistical analysis in psychology, it is important to ask what p says about replication. The answer to this question is "Surprisingly little." In one simulation of 25 repetitions of a typical experiment, p varied from <.001 to .76, thus illustrating that p is a very unreliable measure. This article shows that, if an initial experiment results in two-tailed p = .05, there is an 80% chance the one-tailed p value from a replication will fall in the interval (.00008, .44), a 10% chance that p < .00008, and fully a 10% chance that p > .44. Remarkably, the interval (termed a p interval) is this wide however large the sample size. p is so unreliable and gives such dramatically vague information that it is a poor basis for inference. Confidence intervals, however, give much better information about replication. Researchers should minimize the role of p by using confidence intervals and model-fitting techniques and by adopting meta-analytic thinking.
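The p-interval claim can be checked with a short Monte Carlo sketch. It assumes the initial two-tailed p = .05 corresponds to z = 1.96, and that a replication's z statistic is drawn from N(1.96, √2), the predictive distribution when the true effect is itself only estimated from the first experiment; the sample size and seed are arbitrary:

```python
import math
import random

random.seed(0)
n = 200_000

# Initial result: two-tailed p = .05, i.e. observed z = 1.96.
# Predictive distribution of a replication's z: N(1.96, sqrt(2)),
# because both the original and the replication carry unit sampling error.
z_rep = [random.gauss(1.96, math.sqrt(2)) for _ in range(n)]

# One-tailed p of each replication: p = 1 - Phi(z) = erfc(z / sqrt(2)) / 2.
p_vals = [0.5 * math.erfc(z / math.sqrt(2)) for z in z_rep]

frac_below = sum(p < 0.00008 for p in p_vals) / n  # expect ~10%
frac_above = sum(p > 0.44 for p in p_vals) / n     # expect ~10%
```

Both tail fractions come out near 10%, reproducing the (.00008, .44) interval in the abstract; since the interval is stated on the z scale, no sample size appears anywhere in the calculation, which is the "however large the sample size" point.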
38.
When comparing two target elements placed on the same convoluted curve, response times are dependent on the distance between the targets along the curve, despite being separated by a constant Euclidean distance. The present study assessed whether such line tracing is obligatory across the whole of the line even when the task demands do not require it, or whether it is an optional strategy that can be disregarded when the circumstances favor a different method of attentional deployment. Three experiments were conducted to assess whether attention can select only a portion of a curve to trace when it is strategically sensible to do so. The results suggest that attention can indeed jump over portions of a line that are irrelevant to task performance before tracing has begun. However, the final experiment suggests that line tracing may continue beyond the task-relevant portion of the line. We conclude that line tracing is a strategy whose initial deployment can be influenced by top-down factors, rather than an obligatory response triggered by the stimuli, although, once engaged, line tracing may be hard to stop.
39.
AGM-theory, named after its founders Carlos Alchourrón, Peter Gärdenfors and David Makinson, is the leading contemporary paradigm in the theory of belief-revision. The theory is reformulated here so as to deal with the central relational notions ‘J is a contraction of K with respect to A’ and ‘J is a revision of K with respect to A’. The new theory is based on a principal-case analysis of the domains of definition of the three main kinds of theory-change (expansion, contraction and revision). The new theory is stated by means of introduction and elimination rules for the relational notions. In this new setting one can re-examine the relationship between contraction and revision, using the appropriate versions of the so-called Levi and Harper identities. Among the positive results are the following. One can derive the extensionality of contraction and revision, rather than merely postulating it. Moreover, one can demonstrate the existence of revision-functions satisfying a principle of monotonicity. The full set of AGM-postulates for revision-functions allow for completely bizarre revisions. This motivates a Principle of Minimal Bloating, which needs to be stated as a separate postulate for revision. Moreover, contractions obtained in the usual way from the bizarre revisions, by using the Harper identity, satisfy Recovery. This provides a new reason (in addition to several others already adduced in the literature) for thinking that the contraction postulate of Recovery fails to capture the Principle of Minimal Mutilation. So the search is still on for a proper explication of the notion of minimal mutilation, to do service in both the theory of contraction and the theory of revision. The new relational formulation of AGM-theory, based on principal-case analysis, shares with the original, functional form of AGM-theory the idealizing assumption that the belief-sets of rational agents are to be modelled as consistent, logically closed sets of sentences. 
The upshot of the results presented here is that the new relational theory does a better job of making important matters clear than does the original functional theory. A new setting has been provided within which one can profitably address two pressing questions for AGM-theory: (1) how is the notion of minimal mutilation (by both contractions and revisions) best analyzed? and (2) how is one to rule out unnecessary bloating by revisions?
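The Levi identity the abstract appeals to can be made concrete in a toy possible-worlds model: revision by A is contraction by ¬A followed by expansion by A. The two-atom language and the fixed plausibility ranking below are illustrative stand-ins for AGM's selection machinery, not part of the theory itself:

```python
from itertools import combinations

# Worlds over two atoms, each world = the set of atoms true in it.
atoms = ("p", "q")
worlds = [frozenset(s) for r in range(len(atoms) + 1)
          for s in combinations(atoms, r)]

def prop(atom):
    """The proposition expressed by `atom`: the worlds where it is true."""
    return {w for w in worlds if atom in w}

def neg(A):
    return {w for w in worlds if w not in A}

# Fixed plausibility ranking (an assumed stand-in for entrenchment):
# lower index = more plausible world.
rank = {w: i for i, w in enumerate(worlds)}

def expand(K, A):
    """K + A: keep only the A-worlds."""
    return K & A

def contract(K, A):
    """K / A: give up belief in A by admitting the best not-A worlds."""
    outside = neg(A)
    if not outside or K - A:      # A is a tautology, or not believed anyway
        return set(K)
    best = min(rank[w] for w in outside)
    return K | {w for w in outside if rank[w] == best}

def revise(K, A):
    """Levi identity: K * A = (K / not-A) + A."""
    return expand(contract(K, neg(A)), A)

K = prop("p") & prop("q")           # agent believes both p and q
K_rev = revise(K, neg(prop("p")))   # revise by not-p
```

In this model `K_rev` is non-empty and contains only ¬p-worlds (success and consistency), and revising by something already believed changes nothing (vacuity), which mirrors the principal-case behavior the relational reformulation is designed to expose.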
40.
We examined whether the onset of a new object defined by illusory contours is detected with greater frequency than offset when neither is associated with a unique sensory transient. Observers performed a “one-shot” change detection task in which offsetting or onsetting elements of high luminance contrast circles generated the appearance or disappearance of a Kanizsa figure. Presenting “illusory figures” via this “flicker” method ensures that (1) any unique luminance transients associated with the two types of change are eliminated, and (2) the objects themselves can only be represented at a relatively high level. Results showed that offsets were detected more frequently than onsets only when they generated the onset of a Kanizsa figure. We argue that object appearance dominates object disappearance via mechanisms that operate at the level at which objects are constructed.